Background
The Inventables Marketplace and all the technology needed to support it reside and operate in the cloud. Specifically, we use the Amazon Web Services (AWS) offering, and we’re really happy with it.
Our site is a Ruby on Rails application that runs on two Amazon Elastic Compute Cloud (EC2) instances. All of our static assets, such as images and videos, are stored on the Amazon Simple Storage Service (S3).
Until this past weekend, one of the EC2 instances, in addition to being an application server, also formed our database tier. It ran a local MySQL installation. All data was stored on a locally mounted Amazon EBS volume. A cron job would run twice daily to take consistent snapshots of the volume and store them on S3.
This architecture performed beautifully. It had no trouble handling our current load and never went down. But it did have a few limitations:
Hard to grow
Because it lived on an EBS volume, our database was constrained to a fixed size. Allocating too large a volume is wasteful and expensive, while expanding to a larger one requires a number of steps: bringing the site down; snapshotting the database; creating a new, larger volume from the snapshot; and finally re-attaching and re-mounting the new volume to the instance. The tools offered by AWS, including the AWS Management Console, make each of these steps relatively easy, but they are still required and there are a number of them. Yes, you could probably script them, but there is still a decent amount of heavy lifting involved in expanding to a larger EBS volume.
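For a sense of what that process involves, here is a sketch of the expansion steps using the EC2 command-line API tools; every ID, size, zone, and device name below is a placeholder:

```shell
# 1. Snapshot the existing database volume (with the site already down)
ec2-create-snapshot vol-aaaa1111

# 2. Create a new, larger volume from that snapshot, in the same zone
ec2-create-volume --snapshot snap-bbbb2222 --size 100 -z us-east-1a

# 3. Attach the new volume to the instance, then re-mount it there
ec2-attach-volume vol-cccc3333 -i i-dddd4444 -d /dev/sdh
```

Each step is simple on its own, but together they add up to real downtime and real room for error.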
Hard to scale
First, because it ran locally, our MySQL instance was constrained by the compute capacity of the EC2 instance hosting it. Expanding that capacity would require terminating the instance, starting up a larger one, and reconfiguring our Elastic IPs.
Second, the MySQL instance was competing for resources with the application server running on the same box.
Hard to maintain
Any upgrades or patches required for the MySQL server would have to be installed manually by our development team, meaning time spent maintaining a database and not building new features.
Hard to recover from failure
Recovering from a failure of the database or its data volume would require a sequence of steps similar to those listed above for expanding the volume and/or increasing the compute capacity.
Enter RDS
Amazon RDS provides an elegant solution to all of the above issues. It is a standalone service designed, primarily, to solve the first two problems on our list: growing and scaling. Resizing its storage and scaling up its compute resources each take a single API call and minimal downtime. In addition, Amazon administers the underlying MySQL infrastructure for you, freeing you from DBA responsibilities such as patching and backups. Restoring from a backup is, again, one command away.
How we moved
Our first step was to perform a dry run of the migration by launching a new, standalone EC2 instance based on the same AMI as our primary app/DB server instance. (With the AWS console, creating a new instance is only one click away.) This let us test against a system that behaved exactly like our production system without interfering with it. We pushed the latest version of our code to the new instance and verified that we could hit the app through a browser.
Next, as Ruby developers, we decided to use a Ruby library to interface with the RDS API. We installed the amazon-ec2 gem locally in our development environment and configured it for use like so:
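A sketch of that configuration, assuming the amazon-ec2 gem's RDS interface (the gem exposes RDS through `AWS::RDS::Base`; the credentials here are placeholders read from the environment):

```ruby
require 'rubygems'
require 'AWS'  # provided by the amazon-ec2 gem

# Placeholder credentials -- substitute your own AWS keys
rds = AWS::RDS::Base.new(
  :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY']
)
```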
Once we had an RDS object, we were ready to create a brand new RDS database instance with plenty of space and in the same region as our two EC2 instances:
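That call looked something like the following sketch; the gem's `create_db_instance` wraps the RDS CreateDBInstance action, and every value below (identifier, size, class, credentials, zone) is a placeholder:

```ruby
rds.create_db_instance(
  :db_instance_identifier => 'ourdb',          # placeholder instance name
  :allocated_storage      => 25,               # GB -- plenty of space to grow
  :db_instance_class      => 'db.m1.small',
  :engine                 => 'MySQL5.1',
  :master_username        => 'master',         # placeholder credentials
  :master_user_password   => 'secret',
  :availability_zone      => 'us-east-1a'      # same region as our EC2 instances
)
```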
With the DB instance created, the last remaining piece was to configure its security group to allow network ingress from our EC2 security group:
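A sketch of that call, assuming the gem mirrors the RDS AuthorizeDBSecurityGroupIngress action; the group names and account ID below are placeholders:

```ruby
rds.authorize_db_security_group_ingress(
  :db_security_group_name      => 'default',
  :ec2_security_group_name     => 'web',           # our EC2 security group (placeholder)
  :ec2_security_group_owner_id => '123456789012'   # AWS account ID (placeholder)
)
```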
Our RDS database instance setup was now complete. The last major piece of the dry run was to migrate our data from the local MySQL database to the RDS database and verify that the application was still running. Since our database was under 2 GB, we decided to use the mysqldump utility to create a flat file containing our schema and data, and the mysql tool to import this file into our new DB. The entire process only took a few minutes:
- mysqldump -u user -p old_db > inventables.sql
- mysql -u user -p -h [RDS_connection_string] rds_db < inventables.sql
With the import complete, we modified our config/database.yml on the test machine to use the new RDS connection string, stopped our local MySQL server, restarted Apache, and verified that the application was still running via a web browser and the rails console. Success!
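The database.yml change amounted to pointing the production environment at the RDS endpoint. A sketch, assuming typical Rails database.yml keys; the database name, credentials, and host are all placeholders:

```yaml
production:
  adapter:  mysql
  database: ourdb
  username: master
  password: secret
  host:     ourdb.xxxxxxxx.us-east-1.rds.amazonaws.com  # RDS connection string (placeholder)
```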
Production Migration
With the dry run out of the way, converting the live system was a piece of cake. We put up a maintenance page, turned off all cron jobs (which also interact with our database), then repeated the dump/import and database.yml changes in the production environment. Instead of testing the new DB connection via a browser, however, we tested using only the rails console. We also made sure to test a fresh deployment with and without migrations.
Conclusion
The process of moving from a local MySQL database to RDS was quite painless and only took a few hours. We now have a much more robust architecture that will allow us to more easily respond to the growth of our website.
—Written by Jeff