TRANSCRIPT
Santa Clara, California | April 23rd – 25th, 2018
Migrating to Amazon RDS via XtraBackup
2
Who are we?
Agustín Gallego, Support Engineer - Percona
Alexander Rubin, Principal Consultant - Percona
3
Agenda
- XtraBackup
- Amazon Relational Database Service (RDS)
- Pros and Cons of Migrating to RDS
- Migrating to RDS via XtraBackup
- Limitations
XtraBackup
5
Introduction to XtraBackup
- XtraBackup is a free and open source hot backup tool for MySQL
- Percona Server and MariaDB are also supported
- Implements the functionality found in MySQL Enterprise Backup (InnoDB Hot Backup), and more
6
Introduction to XtraBackup
- Supports hot (lockless) backups for InnoDB and XtraDB
- Locking backups for MyISAM
- Packaged for Linux operating systems only: DEB and RPM packages available, plus a generic Linux tarball and source code
- Main features: incremental and compressed backups, backup locks (as an alternative to FTWRL), encrypted backups, streaming, and the ability to export individual tables and partitions
7
How does it work?

There are three main phases:
- Backup: copies all the files needed
- Apply logs: performs a crash recovery to leave the data in a consistent state
- Copy back: moves the files to their final destination

Sufficient permissions are required at both the OS and MySQL levels; we'll use root accounts here, but there is more in-depth documentation on this.
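The three phases map to separate invocations of the xtrabackup binary. As a minimal sketch of the full lifecycle (using the same example paths as the following slides; the copy-back step assumes the server is stopped and the datadir is empty):

# Phase 1: backup - copy all the files while the server is running
shell> xtrabackup --defaults-file=/etc/my.cnf --backup --target-dir=/backups/full
# Phase 2: apply logs (prepare) - run crash recovery so the copy is consistent
shell> xtrabackup --prepare --target-dir=/backups/full
# Phase 3: copy back - move the files to their final destination, then fix ownership
shell> xtrabackup --defaults-file=/etc/my.cnf --copy-back --target-dir=/backups/full
shell> chown -R mysql:mysql /var/lib/mysql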
8
Full backup - example

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    performance_schema
18M     mysql
31G     test
32G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --target-dir=/backups/full
9
Compressed backup - example 1

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    performance_schema
18M     mysql
31G     test
32G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --compress --target-dir=/backups/compressed
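Note that --compress uses the qpress format, so the files have to be decompressed before the backup can be prepared. A minimal sketch, assuming the qpress binary is installed and XtraBackup 2.3 or later is used:

# Decompress in place (the original *.qp files are kept and can be removed afterwards)
shell> xtrabackup --decompress --target-dir=/backups/compressed
shell> xtrabackup --prepare --target-dir=/backups/compressed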
10
Compressed backup - example 2

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    performance_schema
18M     mysql
31G     test
32G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --compress --stream=xbstream > /backups/compressed/backup.xbstream
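To restore from a streamed backup it first has to be extracted with the xbstream utility. A sketch, where /restore/full is just an example destination directory:

shell> mkdir -p /restore/full
# -x extracts the stream; -C changes into the target directory first
shell> xbstream -x -C /restore/full < /backups/compressed/backup.xbstream
shell> xtrabackup --decompress --target-dir=/restore/full
shell> xtrabackup --prepare --target-dir=/restore/full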
11
Incremental backup - example

shell> cd /var/lib/mysql
shell> du -h --max-depth=1
636K    performance_schema
18M     mysql
37G     test
38G     .

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --incremental-basedir=/backups/full --target-dir=/backups/incremental
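Incrementals are merged into the full backup at prepare time: the base is prepared with --apply-log-only (so committed changes are replayed but uncommitted transactions are not rolled back yet), and then each incremental is applied on top. A minimal sketch for a single incremental:

shell> xtrabackup --prepare --apply-log-only --target-dir=/backups/full
shell> xtrabackup --prepare --target-dir=/backups/full --incremental-dir=/backups/incremental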
12
Parallel compressed backup - example

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream > /backups/compressed_parallel_backup.xbstream
13
Differences between them

- Full: took 9 min 20 sec, resulting size 32 GB
- Compressed: took 7 min 40 sec, resulting size 8.4 GB (original: 32 GB)
- Incremental: took 5 min 40 sec, resulting size 6.2 GB
- Parallel + compressed: took 3 min 40 sec, resulting size 8.4 GB (original: 32 GB)
14
Yet another compressed backup example

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz

- `split` will come in handy later, when we discuss limitations
- `gzip` is also very slow, since it uses a single processor to compress; use pigz instead of gzip
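As a sketch, the same pipeline with pigz instead of gzip (assuming pigz is installed; -p sets the number of compression threads):

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | pigz -c -p 8 | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz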
15-16
Yet another compressed backup example
[Screenshots comparing CPU usage during compression: gzip (single core) vs. pigz (all cores)]
17
Logical vs Binary Backups

- Logical backup: mysqldump; a text file with the commands needed to restore the data
- Physical (binary) backup: the copied files themselves
- Main difference: restore time is much faster with a binary backup, i.e. xtrabackup
Amazon Relational Database Service (RDS)
19
What is RDS / Aurora?

- A web service targeted at making it easy to set up, operate, and scale a relational database
- Features: rapid provisioning, scalable resources, high availability, automated admin tasks
20
AWS Aurora Features
- Storage auto-scaling (up to 64 TB)
- Replication: Amazon Aurora replicas share the same underlying volume as the primary instance (up to 15 replicas); MySQL-based replicas are also possible
- Scalability for reads: can autoscale and add more read replicas
- High availability: Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs), and automatically attempts to recover
21
Read Scaling with RDS Aurora
- RDS MySQL: same as stock MySQL, adding MySQL replication slaves
- Aurora MySQL: read replicas (Aurora-specific, not based on MySQL replication), plus MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Pros of Migrating to RDS

- Easy to manage
- Minor upgrades handled automatically
- Backups handled automatically
- Less DBA work
- Fewer things to worry about (OS config, replication setup, etc.)
24
RDS Aurora for MySQL
- Aurora MySQL additional features: low-latency read replicas, built-in load balancer for reads, instant add column, faster GIS, etc.
- Aurora MySQL preview features:
  - Aurora Multi-Master adds the ability to scale out write performance across multiple Availability Zones
  - Aurora Serverless automatically scales database capacity up and down to match your application's needs
  - Amazon Aurora Parallel Query improves the performance of large analytic queries by pushing processing down to the Aurora storage layer, spreading it across hundreds of nodes
25
Cons of Migrating to RDS
- Instance type limits: instances offer up to 32 vCPUs and 244 GiB of memory
- Less control over the server
- More expensive than using EC2 (can be 3x more expensive)
- Aurora MySQL: single-threaded workloads can be much slower
Migrating to RDS via XtraBackup
27
Announcement
https://aws.amazon.com/about-aws/whats-new/2017/11/easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup/
28
General Steps to Migrate

1. Take an XtraBackup backup from the instance
2. Upload it to an S3 bucket
3. Create the new RDS instance using the backup

The IAM account used should have access to S3.
29
General Steps to Migrate

1. Take an XtraBackup backup from the instance
2. Upload it to an S3 bucket
3. Create the new RDS instance using the backup

It is also possible to do steps 1 and 2 in one go, but if it fails you will have to restart all of it:

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | aws s3 cp - s3://rdsmigrationperconalive18/backup.tar
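Before kicking off the restore, the upload can be sanity-checked from the CLI (the bucket name is the one used in the examples here):

shell> aws s3 ls s3://rdsmigrationperconalive18/ --recursive --human-readable --summarize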
30
Taking the Backup

- Use XtraBackup 2.3, with the latest patch version if possible; the innobackupex script is deprecated
- Choose timing wisely: even if it is a hot backup tool, it will lock for some time
- Use the options from the documentation: --parallel, --compress, --compress-threads
31
Taking the Backup

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --target-dir=/backups/full/2018_04_23
<output trimmed>
180423 16:17:25 [00] ...done
xtrabackup: Transaction log of lsn (181862656397) to (181862656397) was copied.
180423 16:17:25 completed OK!
32
Uploading to S3

shell> time aws s3 cp /backups/full/2018_04_23 s3://rdsmigrationperconalive18/full/2018_04_23 --recursive
<output trimmed>
upload: /backups/full/2018_04_23/xtrabackup_checkpoints to s3://rdsmigrationperconalive18/full/2018_04_23/xtrabackup_checkpoints
upload: /backups/full/2018_04_23/xtrabackup_info to s3://rdsmigrationperconalive18/full/2018_04_23/xtrabackup_info
upload: /backups/full/2018_04_23/xtrabackup_logfile to s3://rdsmigrationperconalive18/full/2018_04_23/xtrabackup_logfile

real    6m15.868s
33
Uploading to S3

It can also be done via the web GUI.
34
Uploading backups to S3

- Full: 6 min 38 sec (32 GB)
- Incremental: 1 min 32 sec (6.2 GB)
- Compressed: 1 min 50 sec (8.4 GB)
35-55
Creating the new RDS instance
[Series of AWS console screenshots walking through creating the new RDS instance from the uploaded backup]
56-57
How much time will it take to restore?
[Console screenshots showing restore progress and timing]
58-59
Using an incremental backup
[Console screenshots of the same workflow using the incremental backup]
60
Using an incremental backup
The S3 folder path is left empty because we will use all of the bucket's contents.
61
Using a compressed backup
62
Using a split backup

- Use a command like the one shown earlier (in "Yet another compressed backup example")
- Upload all the generated files to one folder
- Use that folder as the S3 folder path prefix

shell> xtrabackup --defaults-file=/etc/my.cnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - /backups/compressed/compressed_backup.tar.gz
63
Using the aws CLI command

shell> aws rds restore-db-instance-from-s3 \
    --db-instance-identifier rdsmigrationpl18cli \
    --db-instance-class db.t2.large \
    --engine mysql \
    --source-engine mysql \
    --source-engine-version 5.6.39 \
    --s3-bucket-name rdsmigrationperconalive18 \
    --s3-ingestion-role-arn arn:aws:iam::123456789012:user/username \
    --allocated-storage 100 \
    --master-username rdspl18usercli \
    --master-user-password rdspl18usercli \
    --s3-prefix compressed_split_backup
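The restore runs asynchronously, so once the command returns, progress can be polled until the instance status becomes "available". For example:

shell> aws rds describe-db-instances --db-instance-identifier rdsmigrationpl18cli --query 'DBInstances[0].DBInstanceStatus'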
Limitations
65
Limitations

- Only Percona's XtraBackup is supported (it may work with forks, but...)
- Source databases should all be contained within the datadir
- Only MySQL 5.6 versions are allowed
- There is a 6 TB size limit
- Encryption is only partially supported: restoring to an encrypted RDS instance is allowed, but the source backup can't be encrypted, nor can the S3 bucket
- The S3 bucket has to be in the same region
66
Limitations

- Importing to a db.t2.micro instance class is not supported (it can be changed later)
- S3 limits file size to 5 TB; a backup can be split into smaller files (alphabetical and natural-number ordering is used)
- RDS limits the number of files in the S3 bucket to 1M; they can be merged with tar.gz
- The following are not imported automatically: users, functions, stored procedures, time zone information
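Since users, routines, and time zone data are not imported automatically, they have to be carried over separately. A sketch using standard tools (pt-show-grants is part of Percona Toolkit; the output file names are just examples):

# Dump user grants and stored routines/events from the source server
shell> pt-show-grants > grants.sql
shell> mysqldump --no-create-info --no-data --no-create-db --skip-triggers --routines --events --all-databases > routines.sql
# Replay both files against the new RDS instance with the mysql client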
67
Limitations

- Migrating to previous versions is not supported
- Partial restores are not supported
- Import is only available for new DB instances
- No partial backups are supported: --databases, --tables, --databases-file, --tables-file
- Corruption on the source server, if any, is not detected, due to it being a physical copy
Questions?
69
Rate Our Session
Thank You
2
Who are we
Agustiacuten Gallego Support Engineer - Percona
Alexander Rubin Principal Consultant - Percona
3
Agenda
XtraBackup Amazon Relational Database Service (RDS) Pros and Cons of Migrating to RDS Migrating to RDS via XtraBackup Limitations
XtraBackup
5
Introduction to XtraBackup
XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also
supported Implements functionality in MySQL
Enterprise Backup (InnoDB Hot Backup) and more
6
Introduction to XtraBackup
Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only
DEB and RPM packages available Generic Linux tarball and source code
Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions
7
There are three main phases Backup
Copies all files needed Apply logs
Performs a crash recovery to leave in a consistent state Copy back
Moves the files to their final destination
Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this
How does it work
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
3
Agenda
XtraBackup Amazon Relational Database Service (RDS) Pros and Cons of Migrating to RDS Migrating to RDS via XtraBackup Limitations
XtraBackup
5
Introduction to XtraBackup
XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also
supported Implements functionality in MySQL
Enterprise Backup (InnoDB Hot Backup) and more
6
Introduction to XtraBackup
Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only
DEB and RPM packages available Generic Linux tarball and source code
Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions
7
There are three main phases Backup
Copies all files needed Apply logs
Performs a crash recovery to leave in a consistent state Copy back
Moves the files to their final destination
Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this
How does it work
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
XtraBackup
5
Introduction to XtraBackup
XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also
supported Implements functionality in MySQL
Enterprise Backup (InnoDB Hot Backup) and more
6
Introduction to XtraBackup
Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only
DEB and RPM packages available Generic Linux tarball and source code
Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions
7
There are three main phases Backup
Copies all files needed Apply logs
Performs a crash recovery to leave in a consistent state Copy back
Moves the files to their final destination
Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this
How does it work
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
5
Introduction to XtraBackup
XtraBackup is a free and open source hot backup tool for MySQL Percona Server and MariaDB are also
supported Implements functionality in MySQL
Enterprise Backup (InnoDB Hot Backup) and more
6
Introduction to XtraBackup
Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only
DEB and RPM packages available Generic Linux tarball and source code
Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions
7
There are three main phases Backup
Copies all files needed Apply logs
Performs a crash recovery to leave in a consistent state Copy back
Moves the files to their final destination
Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this
How does it work
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
6
Introduction to XtraBackup
Supports hot (lockless) backups for InnoDB and XtraDB Locking backups for MyISAM Packaged for Linux operating systems only
DEB and RPM packages available Generic Linux tarball and source code
Main features Incremental and compressed backups Backup locks (as an alternative to FTWRL) Encrypted backups Streaming Ability to export individual tables and partitions
7
There are three main phases Backup
Copies all files needed Apply logs
Performs a crash recovery to leave in a consistent state Copy back
Moves the files to their final destination
Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this
How does it work
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a db.t2.micro instance class is not supported; the class can be changed later.
S3 limits file size to 5 TB; larger backups can be split into smaller files (alphabetical and natural-number orderings are used).
RDS limits the number of files on the S3 bucket to 1 million; files can be merged with tar.gz to stay under it.
The following are not imported automatically: users, functions, stored procedures, time zone information.
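One way to carry part of that over (a sketch, not from the deck; the endpoint placeholder and schema name are illustrative): dump the routines with mysqldump and replay them against the RDS endpoint. Users and time zone data need separate handling, e.g. recreating grants manually.
shell> mysqldump --no-data --no-create-info --skip-triggers --routines mydb > routines.sql
shell> mysql -h <rds-endpoint> -u rdspl18usercli -p mydb < routines.sql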
Limitations
67
Limitations
Migrating to previous versions is not supported. Partial restores are not supported. Import is only available for new DB instances. No partial backups are supported:
--databases, --tables, --databases-file, --tables-file
Corruption on the source server, if any, is not detected, due to this being a physical copy.
Questions
69
Rate Our Session
Thank You
7
There are three main phases Backup
Copies all files needed Apply logs
Performs a crash recovery to leave in a consistent state Copy back
Moves the files to their final destination
Enough permissions at OS and MySQL levels required well use root accounts but there is more in-depth documentation on this
How does it work
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
8
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull
Full backup - example
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
9
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --target-dir=backupscompressed
Compressed backup - example 1
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
10
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 31G test 32G
shellgt xtrabackup --defaults-file=etcmycnf --backup --compress --stream=xbstream gt backupscompressedbackupxbstream
Compressed backup - example 2
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
11
shellgt cd varlibmysql shellgt du -h --max-depth=1 636K performance_schema 18M mysql 37G test 38G
shellgt xtrabackup --defaults-file=etcmycnf --backup --incremental-basedir=backupsfull --target-dir=backupsincremental
Incremental backup - example
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
12
shellgt xtrabackup --defaults-file=etcmycnf --backup --parallel=8 --compress --compress-threads=8 --stream=xbstream gt backupscompressed_parallel_backupxbstream
Parallel compressed backup - example
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
13
Full took 9 min 20 sec resulting size 32 Gb
Compressed took 7 min 40 sec resulting size 84 Gb (original 32 Gb)
Incremental took 5 min 40 sec resulting size 62 Gb
Parallel + compressed took 3 min 40 sec resulting size 84 Gb (original 32 Gb)
Differences between them
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported
Partial restores are not supported
Import is only available for new DB instances
No partial backups are supported: --databases, --tables, --databases-file, --tables-file
Corruption on the source server, if any, is not detected, since this is a physical copy
Questions
69
Rate Our Session
Thank You
14
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
`split` will come handy afterwards when we discuss limitations `gzip` is also very slow since it uses one processor to compress
use pigz instead of gzip
Yet another compressed backup example
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
15
Yet another compressed backup example
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
16
Yet another compressed backup example
gzip
pigz
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
17
Logical backup mysqldump text file with commands to restore Physical backup copied files Main differences restore time
Much faster with binary backup ie xtrabackup
Logical vs Binary Backups
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
Amazon Relational Database Service (RDS)
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
19
What is RDS Aurora
Web Service targeted to easily setup operate scale
Features rapid provisioning scalable resources high availability automatic admin tasks
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
20
AWS Aurora Features
Storage Auto-Scaling (up to 64Tb) Replication
Amazon Aurora replicas share the same underlying volume as the primary instance up to 15 replicas
MySQL based replicas Scalability for reads
can autoscale and add more read replicas High Availability
Amazon Aurora automatically maintains six copies of your data across three Availability Zones (AZs)
Automatically attempt to recover
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
21
Read Scaling with RDS Aurora
RDS MySQL same as MySQL adding MySQL replication slaves
Aurora MySQL Read replicas (aurora specific) - not based on MySQL replication MySQL replication (for cross-datacenter replication)
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
Pros and Cons of Migrating to RDS
23
Easy to manage Minor upgrades automatically handled Backups automatically handled Less DBA work Less things to worry about (OS config replication setup etc)
Pros of Migrating to RDS
24
RDS Aurora for MySQL
Aurora MySQL additional features Low latency read replicas Load balancer for reads built-in Instant add column faster GIS etc
Aurora MySQL preview Aurora Multi-Master adds the ability to scale out write performance across multiple
Availability Zones Aurora Serverless automatically scales database capacity up and down to match
your application needs Amazon Aurora Parallel Query improves the performance of large analytic queries
by pushing processing down to the Aurora storage layer spreading processing across hundreds of nodes
25
Cons of Migrating to RDS
Instance type limits Instances offer up to 32 vCPUs and 244 GiB Memory
Less control over the server More expensive than using EC2 (can be 3x more expensive) Aurora MySQL - single-threaded workload can be much slower
Migrating to RDS via XtraBackup
27
Announcement
httpsawsamazoncomabout-awswhats-new201711easily-restore-an-amazon-rds-mysql-database-from-your-mysql-backup
28
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
IAM account used should have access to S3
General Steps to Migrate
29
Take XtraBackup backup from instance Upload to S3 bucket Create new RDS instance using the backup
It is also possible to do 1 and 2 in one step but if it fails you will have to restart all of it
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | aws s3 cp - s3rdsmigrationperconalive18backuptar
General Steps to Migrate
30
Use XtraBackup 23 latest patch version if possible
The innobackupex script is deprecated Choose timing wisely
even if it is a hot backup tool it will lock for some time Use the options from the documentation
parallel compressed compress-threads
Taking the Backup
31
shellgt xtrabackup --defaults-file=etcmycnf --backup --target-dir=backupsfull2018_04_23 ltoutput trimmedgt 180423 161725 [00] done xtrabackup Transaction log of lsn (181862656397) to (181862656397) was copied 180423 161725 completed OK
Taking the Backup
32
shellgt time aws s3 cp backupsfull2018_04_23 s3rdsmigrationperconalive18full2018_04_23 --recursive ltoutput trimmedgt upload backupsfull2018_04_23xtrabackup_checkpoints to s3rdsmigrationperconalive18full2018_04_23xtrabackup_checkpoints upload backupsfull2018_04_23xtrabackup_info to s3rdsmigrationperconalive18full2018_04_23xtrabackup_info upload backupsfull2018_04_23xtrabackup_logfile to s3rdsmigrationperconalive18full2018_04_23xtrabackup_logfile
real 6m15868s
Uploading to S3
33
Can also be done via web GUI
Uploading to S3
34
Uploading backups to S3
Full 6 min 38 sec 32 Gb
Incremental 1 min 32 sec 62 Gb
Compressed 1 min 50 sec 84 Gb
35
Creating the new RDS instance
36
Creating the new RDS instance
37
Creating the new RDS instance
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shell> aws rds restore-db-instance-from-s3 \
    --db-instance-identifier rdsmigrationpl18cli \
    --db-instance-class db.t2.large \
    --engine mysql \
    --source-engine mysql \
    --source-engine-version 5.6.39 \
    --s3-bucket-name rdsmigrationperconalive18 \
    --s3-ingestion-role-arn arn:aws:iam::123456789012:user/username \
    --allocated-storage 100 \
    --master-username rdspl18usercli \
    --master-user-password rdspl18usercli \
    --s3-prefix compressed_split_backup
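The restore can take a while; the CLI can block until the instance is ready and then print its endpoint (same instance identifier as above):

shell> aws rds wait db-instance-available --db-instance-identifier rdsmigrationpl18cli
shell> aws rds describe-db-instances --db-instance-identifier rdsmigrationpl18cli \
    --query 'DBInstances[0].[DBInstanceStatus,Endpoint.Address]'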
Limitations
65
Only Percona's XtraBackup is supported (it may work with forks, but...)
Source databases should all be contained within the datadir
Only MySQL 5.6 versions are allowed
There is a 6 TB size limit
Encryption is only partially supported:
only restoring to an encrypted RDS instance is allowed; the source backup can't be encrypted, nor can the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a db.t2.micro instance class is not supported (it can be changed later)
S3 limits file size to 5 TB; larger backups can be split into smaller files (alphabetical and natural-number ordering is used)
RDS limits the number of files in the S3 bucket to 1M; they can be merged with tar.gz
The following are not imported automatically: users, functions, stored procedures, time zone information (a sketch for carrying users and routines over follows below)
Limitations
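One way to carry users and routines over manually, as a sketch: pt-show-grants comes with Percona Toolkit, the database name (test) matches the earlier examples, and the endpoint is a placeholder; grants using privileges RDS does not allow (e.g. SUPER) will need editing before loading:

shell> pt-show-grants > grants.sql
shell> mysqldump --no-data --no-create-info --routines --skip-triggers test > routines.sql
shell> mysql -h <rds-endpoint> -u rdspl18usercli -p < grants.sql
shell> mysql -h <rds-endpoint> -u rdspl18usercli -p test < routines.sql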
67
Limitations
Migrating to previous versions is not supported
Partial restores are not supported
Import is only available for new DB instances
No partial backups supported: --databases, --tables, --databases-file, --tables-file
Corruption on the source server, if any, is not detected, since this is a physical copy
Questions
69
Rate Our Session
Thank You
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
38
Creating the new RDS instance
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
39
Creating the new RDS instance
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
40
Creating the new RDS instance
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
41
Creating the new RDS instance
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
42
Creating the new RDS instance
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
43
Creating the new RDS instance
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
44
Creating the new RDS instance
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
45
Creating the new RDS instance
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
46
Creating the new RDS instance
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
47
Creating the new RDS instance
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
48
Creating the new RDS instance
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
49
Creating the new RDS instance
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
50
Creating the new RDS instance
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
51
Creating the new RDS instance
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
52
Creating the new RDS instance
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
53
Creating the new RDS instance
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
54
Creating the new RDS instance
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
55
Creating the new RDS instance
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
56
How much time will it take to restore
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
57
How much time will it take to restore
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You
58
Using an incremental backup
59
Using an incremental backup
60
Using an incremental backup
S3 folder path is empty because we will use all contents
61
Using a compressed backup
62
Use a command like the following (seen in slide 13) Upload all generated files to one folder Use that folder as S3 folder path prefix
shellgt xtrabackup --defaults-file=etcmycnf --backup --stream=tar | gzip -c | split -d --bytes=10GB - backupscompressedcompressed_backuptargz
Using a split backup
63
Using the aws CLI command
shellgt aws rds restore-db-instance-from-s3
--db-instance-identifier rdsmigrationpl18cli
--db-instance-class dbt2large
--engine mysql
--source-engine mysql
--source-engine-version 5639
--s3-bucket-name rdsmigrationperconalive18
--s3-ingestion-role-arn arnawsiam123456789012userusername
--allocated-storage 100
--master-username rdspl18usercli
--master-user-password rdspl18usercli
--s3-prefix compressed_split_backup
Limitations
65
Only Perconas XtraBackup is supported it may work with forks but
Source databases should all be contained within the datadir Only MySQL 56 versions are allowed There is a 6 Tb size limit Encryption is only partially supported
only restore to an encrypted RDS instance is allowed source backup cant be encrypted nor the S3 bucket
The S3 bucket has to be in the same region
Limitations
66
Importing to a dbt2micro instance class is not supported it can be changed later
S3 limits file size to 5 Tb it can be split into smaller files alphabetical and natural number orders are used
RDS limits the number of files on the S3 bucket to 1M they can be merged with targz
The following are not imported automatically Users Functions Stored Procedures Time zone information
Limitations
67
Limitations
Migrating to previous versions is not supported Partial restores are not supported Import is only available for new DB instances No partial backups supported
--databases --tables --databases-file --tables-file Corruption on source server is not detected if any due to being physical
copy
Questions
69
Rate Our Session
Thank You