New Database Migration Services & RDS Updates
David Elliott, AWS Solutions Architect
TRANSCRIPT — London
Amazon RDS
Managed DB service in the cloud
Let's recap what RDS is.
If you host your databases on-premises, you manage everything: power, HVAC, and networking; rack & stack; server maintenance; OS installation; OS patches; DB software patches; database backups; scaling; high availability; DB software installs; and app optimization.
If you host your databases in Amazon EC2, AWS handles the power, HVAC, and networking; rack & stack; server maintenance; and OS installation. You still manage OS patches, DB software patches, database backups, scaling, high availability, DB software installs, and app optimization.
If you choose Amazon RDS, AWS handles the power, HVAC, and networking; rack & stack; server maintenance; OS installation; OS patches; DB software patches; database backups; high availability; DB software installs; and scaling. You handle app optimization.
All the time that's freed up by offloading undifferentiated labor to AWS can be used to do the app optimizations you always wanted to have time to do.
Fully Managed Relational Database Service for the Cloud
Choice of multiple engines - MySQL, Aurora, PostgreSQL, Oracle, SQL Server
More than 100,000 active customers, including many major enterprises
Previously we've supported five engines.
RDS feature matrix
Feature            Aurora   MySQL    PostgreSQL  Oracle   SQL Server
Max storage        64 TB    6 TB     6 TB        6 TB     4 TB
Provisioned IOPS   N/A      30,000   30,000      30,000   20,000
Largest instance   R3.8XL   R3.8XL   R3.8XL      R3.8XL   R3.8XL
Other rows of the matrix compare VPC support, high availability, instance scaling, encryption (coming soon for some engines), read replicas, Oracle GoldenGate support, cross-region replication, storage scaling, and Auto Scaling across the engines.
FedRAMP for RDS. Reason for launch: requirement for US Federal agencies.
RDS BAA inclusion. Reason for launch: US regulatory compliance.
Amazon RDS for MariaDB
Available now in all public AWS regions
Supports MariaDB 10.0.17
All current instance classes (T2, M3, R3)
EBS storage volumes (up to 6 TB, 30,000 IOPS)
Same price as RDS MySQL on-demand and RIs
RDS MariaDB:
https://aws.amazon.com/rds/mariadb
MariaDB is the default choice for MySQL in popular Linux distributions; the M in the LAMP stack is now often MariaDB instead of MySQL.
Parallel replication, thread pools, and multi-source replication are features not available in standard MySQL. MariaDB is pretty close to a drop-in replacement, but there are some incompatibilities described here: https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/
2M+ users in 45 countries, including major enterprises (2015, MariaDB Corporation).
Many are both using MariaDB in production applications and contributing to it. Facebook is an active contributor, and we have worked with them on RDS MariaDB.
Getting started with Amazon RDS for MariaDB
Information: https://aws.amazon.com/rds/mariadb
Pricing: https://aws.amazon.com/rds/mariadb/pricing/
MariaDB user guide: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_MariaDB.html
Designed to be highly available, highly secure, easier to use, and cheaper.
Designed to have built-in high availability and cross-region replication across multiple data centers.
Managed backup and point-in-time recovery.
Push-button provisioning; automated instance and storage scaling, patching, security, restores, and general care and feeding.
Lower TCO because we manage the muck: get more leverage from your teams and focus on the things that differentiate you.
Scaling helps with changing work profiles, e.g. at weekends; storage can be scaled as you grow.
RDS high availability with Multi-AZ deployments: an enterprise-grade fault-tolerance solution for production databases.
An Availability Zone is physically distinct, independent infrastructure. Your database is synchronously replicated to another AZ in the same AWS region, and failover occurs automatically in response to the most important failure scenarios.
We strongly recommend the use of Multi-AZ in production. Binlog replication (as used by read replicas) is asynchronous, so if you have to fail over to a replica there's a chance you may lose data.
With Multi-AZ we replicate synchronously at the storage layer to the secondary, or standby. Primary and standby are monitored, and if the primary is unavailable for whatever reason, RDS automatically switches over to the secondary. Typically this is pretty fast, e.g. 45-60 seconds.
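The Multi-AZ behaviour described above is a single flag set when the instance is created. A minimal sketch of the parameters you would pass to boto3's `rds.create_db_instance`; the instance name, class, and credentials here are illustrative assumptions, and the dict is built locally so the sketch runs without AWS credentials:

```python
# Parameter dict for boto3's rds.create_db_instance (sketch only;
# identifiers and sizes below are made-up examples).
multi_az_params = {
    "DBInstanceIdentifier": "orders-db",   # hypothetical instance name
    "Engine": "mysql",
    "DBInstanceClass": "db.r3.large",
    "AllocatedStorage": 100,               # GB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me",     # placeholder, never hard-code
    # The one flag that enables a synchronous standby in another AZ
    # with automatic failover:
    "MultiAZ": True,
}

# To actually create the instance you would run:
#   import boto3
#   boto3.client("rds").create_db_instance(**multi_az_params)
```

Everything else about the standby (placement, monitoring, failover) is managed by RDS; there is no separate standby instance to configure.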
Customers love multi-AZ
Very popular feature
Strong growth for Multi-AZ when launched (still growing strongly).
Choose cross-region read replicas for faster disaster recovery and enhanced data locality.
Promote a read replica to a master for faster recovery in the event of disaster.
Bring data close to your customers' applications in different regions.
Promote to a master for easy migration
Cross-region read replicas are available for Amazon RDS for MySQL and MariaDB
You may have customers all over the world; bring your data closer to them with managed replication.
DR is faster than starting from scratch when using read replicas.
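Creating a cross-region read replica is a single API call made in the destination region. A sketch of the parameters for boto3's `rds.create_db_instance_read_replica`; the identifiers and account ID are illustrative assumptions:

```python
# Parameter dict for boto3's rds.create_db_instance_read_replica
# (sketch only; names and account ID are made-up examples).
replica_params = {
    "DBInstanceIdentifier": "orders-db-replica-eu",
    # For cross-region replicas the source is referenced by its full ARN:
    "SourceDBInstanceIdentifier":
        "arn:aws:rds:us-east-1:123456789012:db:orders-db",
}

# Called with a client in the *destination* region:
#   import boto3
#   rds = boto3.client("rds", region_name="eu-west-1")
#   rds.create_db_instance_read_replica(**replica_params)
#
# Promotion for DR is a separate call, e.g.:
#   rds.promote_read_replica(DBInstanceIdentifier="orders-db-replica-eu")
```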
Choose cross-region snapshot copy for even greater durability and ease of migration. Copy a database snapshot to a different AWS region.
Use it as a warm standby for disaster recovery, or as a base for migration to a different region.
Cross-region snapshot copy is available for all RDS engines.
If you don't want a read replica running all the time, you can take a snapshot of your DB, e.g. nightly, for warm standby.
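The nightly warm-standby pattern above comes down to one call per snapshot. A sketch of the parameters for boto3's `rds.copy_db_snapshot`, made with a client in the destination region; the snapshot names and account ID are illustrative assumptions:

```python
# Parameter dict for boto3's rds.copy_db_snapshot (sketch only;
# snapshot identifiers and account ID are made-up examples).
copy_params = {
    # Cross-region copies reference the source snapshot by its full ARN:
    "SourceDBSnapshotIdentifier":
        "arn:aws:rds:us-east-1:123456789012:snapshot:orders-db-nightly",
    "TargetDBSnapshotIdentifier": "orders-db-nightly-eu-copy",
}

# Called with a client in the destination region:
#   import boto3
#   boto3.client("rds", region_name="eu-west-1").copy_db_snapshot(**copy_params)
```

A scheduled job (e.g. a nightly cron) issuing this call gives you a warm standby in another region without paying for an always-on replica.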
Amazon RDS is easy to monitor with Amazon CloudWatch. CloudWatch RDS metrics include CPU utilization, storage, memory, swap usage, DB connections, I/O (read and write), latency (read and write), throughput (read and write), replica lag, and many more; you can also set CloudWatch alarms on them.
Similar to on-premises custom monitoring tools
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. The system-wide visibility into resource utilization, application performance, and operational health that Amazon CloudWatch provides can help you keep your applications running smoothly.
You can view these via the RDS console.
Advanced monitoring (coming soon): 50+ system/OS metrics | sorted process list view | 1-60 sec granularity | alarms on specific metrics | egress to CloudWatch Logs | integration with 3rd-party tools
Customers love RDS but would like it to be more transparent, e.g. a process list like the one from top. The current granularity of monitoring is around 5 seconds; with advanced monitoring it is anywhere between 1 and 60 seconds. Customisable dashboards let you resize and move things around, and you can stream metrics to CloudWatch Logs. Integration with Graphite, Ganglia, etc. will let you pull together monitoring from on-premises, EC2, and RDS.
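Pulling these metrics programmatically uses the standard CloudWatch API. A sketch of the parameters for boto3's `cloudwatch.get_metric_statistics`; RDS publishes under the `AWS/RDS` namespace keyed by `DBInstanceIdentifier`, and the instance name here is an illustrative assumption:

```python
from datetime import datetime, timedelta

# Parameter dict for boto3's cloudwatch.get_metric_statistics
# (sketch only; the instance identifier is a made-up example).
now = datetime.utcnow()
metric_params = {
    "Namespace": "AWS/RDS",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": "orders-db"}],
    "StartTime": now - timedelta(hours=1),   # last hour of data
    "EndTime": now,
    "Period": 300,                           # seconds; a multiple of 60
    "Statistics": ["Average"],
}

# To fetch the datapoints:
#   import boto3
#   boto3.client("cloudwatch").get_metric_statistics(**metric_params)
```

Swapping `MetricName` for `FreeStorageSpace`, `DatabaseConnections`, `ReplicaLag`, etc. retrieves the other metrics listed above the same way.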
AWS Database Migration Service
Announcing preview of AWS DMS
What it basically is: a managed, hosted data replication service engineered to support graceful migration from legacy database systems to the next generation of managed databases at AWS.
Embracing the cloud demands a cloud data strategy
How will my on-premises data migrate to the cloud?
How can I make it transparent to my customers?
Afterwards, how will on-premises and cloud data interact?
How can I integrate my data assets within AWS?
Can I get help moving off of commercial databases?
These days, we're hearing a lot of customers tell us they want to move their on-premises applications into the cloud.
But moving applications is simpler than moving the databases they depend on.
Applications are usually stateless, and can be moved fairly easily using a lift and shift approach.
(CLICK) But databases are stateful, and they require more care. Moving databases to AWS requires a data migration strategy.
(CLICK) And when it comes to designing those strategies, customers want to be able to do it with the least possible inconvenience and visibility to their users.
(CLICK) And once an application is migrated to AWS, it's not the end of the story. Often customers have several applications, some in the cloud and some on premises or in hosted environments. Customers need to be able to synchronize their data between on-premises and cloud-based applications.
(CLICK) And the same goes for applications within AWS. Those applications often share data, and customers want to be able to synchronize and replicate data between the various databases they maintain within AWS.
(CLICK) And one other thing: customers moving applications to the cloud often see it as an opportunity to break free from commercial databases, which tend to have a heavy licensing burden. We often hear customers asking us for a way to convert their commercial databases into AWS solutions, such as RDS MySQL, PostgreSQL, Aurora, and Redshift.
Historically, migration = cost and time
Commercial migration/replication software
Complex to set up and manage
Legacy schema objects, PL/SQL or T-SQL code
Application downtime
AWS Database Migration Service
Start your first migration in 10 minutes or less
Keep your apps running during the migration
Replicate within, to, or from Amazon EC2 or RDS
Move data to the same or a different database engine
Sign up for preview at aws.amazon.com/dms
Like all AWS services, it is easy and straightforward to get started; you can begin your first migration task in 10 minutes or less. You simply connect it to your source and target databases, and it copies the data over and begins replicating changes from source to target. That means you can keep your apps running during the migration, then switch over at a time that is convenient for your business. In addition to one-time database migration, you can also use DMS for ongoing data replication, within, to, or from AWS EC2 or RDS databases. For instance, after migrating your database, use DMS to replicate data into your Redshift data warehouses, cross-region to other RDS instances, or back to on-premises. Again, it is heterogeneous: with DMS you can move data between engines. It supports Oracle, Microsoft SQL Server, MySQL, PostgreSQL, MariaDB, Amazon Aurora, and Amazon Redshift.
10 minutes or less to migration
Let's take a look at how to use the Database Migration Service. That will take you to a page that describes how DMS works to migrate your data: how you connect it to a source database and a target database, then define replication tasks to move the data.
Type in a name for the replication instance, select an instance type (m3.large), and choose a VPC.
Define source and target connection details: give each a name and the type of engine. Use a connection string for an on-premises DB (connect back via VPN / Direct Connect).
The console pre-populates RDS DBs.
Then create a task; this is a unit of work that picks up data and replicates it across. Choose the method, e.g. load and unload.
You can monitor progress.
[Architecture diagram: application users on the customer premises; the source database connects to the AWS Database Migration Service over the Internet or a VPN.]
Start a replication instance.
Connect to source and target databases.
Select tables, schemas, or databases, and let AWS Database Migration Service create the tables, load the data, and keep them in sync.
Switch applications over to the target at your convenience; keep your apps running during the migration.
How does it obtain changes? It calls each engine's change-capture API: for Oracle, LogMiner; for MySQL, the binlog API; for PostgreSQL, logical decoding using replication slots.
(CLICK) Start by spinning up a DMS instance in your AWS environment. (CLICK) Next, from within DMS, connect to both your source and target databases. (CLICK) Choose what data you want to migrate: DMS lets you migrate tables, schemas, or whole databases. (CLICK) It creates the tables, loads the data, and, best of all, keeps them synchronized for as long as you need.
That replication capability, which keeps the source and target data in sync, allows customers to switch applications (CLICK) over to point to the AWS database at their leisure. DMS eliminates the need for high-stakes extended outages to migrate production data into the cloud. DMS provides a graceful switchover capability.
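The "migrate then keep in sync" behaviour maps to a single replication task. A sketch of the parameters for boto3's `dms.create_replication_task`, assuming the preview-era API shape; the ARNs and schema name are placeholder assumptions:

```python
import json

# Table-mapping rules select what to migrate; this one includes every
# table in a hypothetical SALES schema.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-sales-schema",
        "object-locator": {"schema-name": "SALES", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Parameter dict for boto3's dms.create_replication_task (sketch only;
# the ARNs are abbreviated placeholders, not real resources).
task_params = {
    "ReplicationTaskIdentifier": "sales-migration",
    "SourceEndpointArn": "arn:aws:dms:...:endpoint:SOURCE",
    "TargetEndpointArn": "arn:aws:dms:...:endpoint:TARGET",
    "ReplicationInstanceArn": "arn:aws:dms:...:rep:INSTANCE",
    # full-load-and-cdc = copy existing data, then keep replicating changes,
    # which is what enables the graceful switchover described above.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps(table_mappings),
}

# To actually start:
#   import boto3
#   dms = boto3.client("dms")
#   dms.create_replication_task(**task_params)
#   dms.start_replication_task(ReplicationTaskArn=..., ...)
```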
After migration, use for replication and data integration
Replicate data in on-premises databases to AWS
Replicate OLTP data to Amazon Redshift
Integrate tables from third-party software into your reporting or core OLTP systems
Hybrid cloud is a stepping stone in migration to AWS
But DMS is for much more than just migration.
(CLICK) DMS enables customers to adopt a hybrid approach to the cloud, maintaining some applications on premises, and others within AWS.
There are dozens of compelling use cases for a hybrid cloud approach using DMS.
(CLICK) For customers just getting their feet wet, AWS is a great place to keep up-to-date read-only copies of on-premises data for reporting purposes. AWS services like Aurora, Redshift, and RDS are great platforms for this.
(CLICK) With DMS, you can maintain copies of critical business data from third-party or ERP applications, like employee data from Peoplesoft, or financial data from Oracle E-Business Suite, in the databases used by the other applications in your enterprise. In this way, it enables application integration in the enterprise.
(CLICK) Another nice thing about the hybrid cloud approach is that it lets customers become familiar with AWS technology and services gradually. DMS enables that. Moving to the cloud is much simpler if you have a way to link the data and applications that have moved to AWS with those that haven't.
Cost-effective, with no upfront costs
T2 pricing starts at $0.018 per hour for t2.micro; C4 pricing starts at $0.154 per hour for c4.large
50 GB GP2 storage included with T2 instances; 100 GB GP2 storage included with C4 instances
Data transfer inbound and within an AZ is free; data transfer across AZs starts at $0.01 per GB
(The included storage covers the replication engine's swap, logs, and cache)
With the AWS Database Migration Service you pay for the migration instance that moves your data from your source database to your target database. (CLICK) Each database migration instance includes storage sufficient to support the needs of the replication engine, such as swap space, logs, and cache. (CLICK) Inbound data transfer is free. (CLICK) Additional charges apply only (CLICK) if you decide to allocate additional storage for data migration logs, or when you replicate your data to a database in another region or on-premises.
AWS Database Migration Service currently supports the T2 and C4 instance classes. T2 instances are low-cost standard instances designed to provide a baseline level of CPU performance with the ability to burst above the baseline. They are suitable for developing, configuring and testing your database migration process, and for periodic data migration tasks that can benefit from the CPU burst capability. C4 instances are designed to deliver the highest level of processor performance and achieve significantly higher packet per second (PPS) performance, lower network jitter, and lower network latency. You should use C4 instances if you are migrating large databases and are looking to minimize the migration time.
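The pricing above is simple enough to estimate in a few lines. A back-of-envelope sketch using the on-demand rates quoted in this talk; the GP2 per-GB-month rate for extra storage is an assumed illustrative figure, not quoted here:

```python
# On-demand DMS instance rates quoted above (USD per hour).
HOURLY = {"dms.t2.micro": 0.018, "dms.c4.large": 0.154}

def migration_cost(instance: str, hours: float,
                   extra_storage_gb: float = 0.0,
                   gp2_per_gb_month: float = 0.10) -> float:
    """Instance-hours plus storage beyond the included allocation.

    gp2_per_gb_month is an assumed rate for illustration only;
    included storage (50 GB on T2, 100 GB on C4) costs nothing extra.
    """
    return HOURLY[instance] * hours + extra_storage_gb * gp2_per_gb_month

# A 3-day (72-hour) trial run on the burstable t2.micro:
small = migration_cost("dms.t2.micro", 72)
# A 720-hour month on c4.large with 100 GB of extra log storage:
big = migration_cost("dms.c4.large", 720, extra_storage_gb=100)
```

Even the month-long c4.large scenario lands at roughly $120, which is the point of the slide: migration cost is dominated by engineering time, not infrastructure.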
Migrate and replicate between database engines
Elaborate on heterogeneous use cases:
Database engine migration for cost savings: move to a fully managed, scalable, cloud-native enterprise-class engine like Aurora.
Low-cost reporting, analytics, and BI for systems running on commercial OLTP engines (using MySQL, PostgreSQL, or Aurora).
Data integration: data such as customer accounts can be presented not only on the master platform, but also in applications based on non-commercial engines.
But you can't just pick up an Oracle table and put it down in MySQL, and you can't run an Oracle PL/SQL package on Postgres. To migrate or replicate data between engines, you need a way to convert the schema: to build a set of tables and objects on the destination that is native to that engine. We've been working on that problem.
AWS Schema Conversion Tool
Migrate off Oracle and SQL Server
Move your tables, views, stored procedures and DML to MySQL, MariaDB, and Amazon Aurora
Know exactly where manual edits are needed
The AWS Schema Conversion Tool is a development environment (code browser) that you download to your desktop
Clients for Windows, Mac, 2 flavours of Linux
You can convert database objects such as tables, indexes, views, stored procedures, and Data Manipulation Language (DML) statements like SELECT, INSERT, DELETE, and UPDATE.
Get help with converting tables, views, and code
Schemas, tables, indexes, views, packages, stored procedures, functions, triggers, sequences, user-defined types, and synonyms.
PostgreSQL support coming soon.
Move your database schema and code
MySQL Workbench. Switch to the Schema Conversion Tool; it analyses metadata and shows me all the schemas I have, including the sales application and a data-cleansing stored procedure. Review Aurora; it's empty. Run the tool: the same number of objects (tables, views, functions). The new function is longer but behaves the same as before. Generate the new objects, then view the successful migration in Aurora. The tool includes an extension schema to simplify migrations from Oracle, and you can see the other DB objects that have been generated.
Know exactly where manual edits are needed
The tool won't convert all of your source code and functionality. The source code browser lets you view recommended actions based on analysis of the source code.
In this case, the PUT_LINE function of Oracle's DBMS_OUTPUT package: MySQL doesn't have this, so you'd have to complete some code for the extension package to output log records.
Database migration partners
aws.amazon.com/partners
UK-specific partners? Not sure.
Sign up for the AWS Database Migration Service preview now: aws.amazon.com/dms
Download the AWS Schema Conversion Tool: aws.amazon.com/dms