If you are familiar with Ruby on Rails, you know what Migrations are. The same thing can and should be done in Java; it's just not that well known.
Why migrations? Because they let you automatically update any environment you have to the latest version, and this is done through source control, closely tied to the code. Every time a developer checks in a change to the database it gets propagated to all the developer sandboxes. The person responsible for deploying into the test environment doesn't have to know which changes have been applied and which haven't; that is handled automatically by a process CI has already tested. But more on this later.
Simple migration
For those of you who don't know what migrations are, a migration is a change set. An example would be a change that adds a column to a table. Writing a migration then means writing either code or SQL that represents an alter table add column statement; you do not update the original create table statement. For RoR this means some Ruby code, and with the current tools in Java it means writing SQL. I'm not quite sure which of them I actually prefer, but when making changes to tables with millions of rows I like to have as much control over the SQL as possible. The rest of this entry will focus on the Java way of writing SQL.
So when I want to make a change to the database I would create a new file called something like 0325_add_comment_column_to_person.sql :
ALTER TABLE PERSON ADD (PERSON_COMMENT VARCHAR2(255));
You might notice that this is not portable: VARCHAR2 is an Oracle specific thing. But how often do you change database vendors? Not very often, and your code is still vendor independent since you're using iBatis or Hibernate. Besides, if you do switch databases you need to port the schema anyway, and since you port the base line, this change comes along with it.
Base line
Base line, you say? For these migrations to work you need something that is already there; you obviously can't do an alter table without a create table first. The create table migration lies a bit back in time, but it has been done. If everything that happens in your database is done as migrations in separate files, you will accumulate a lot of scripts over time. To prevent this, at regular intervals generate a new script containing the schema that represents the current state of your database, taken from production or some "production like" environment. Then you can delete all the scripts that lead up to that base line, and keep adding new scripts on top of it.
This also makes it easier to create new instances of your database. You first run the schema from the base line. Say the base line includes everything up to revision 307; after running it, you apply the migrations that are still missing, up to number 325 which we created above.
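To make this concrete, the migrations directory might look something like this once a base line has been established (the file names are purely illustrative):

baseline_schema_rev307.sql
0308_add_index_on_person_name.sql
...
0325_add_comment_column_to_person.sql

A fresh database instance is then created by running the base line script first, followed by every numbered script after revision 307, in order.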
The harder migrations
One important concept with migrations is that you can't just think about the schema; you also need to take the data in the tables into consideration. That wasn't really hard when adding a column that can be null. If we were adding a column with a 'not null' constraint we would need to figure out what to fill in. Maybe there is a sensible default, or maybe you have to calculate the new values. To achieve this you must write PL/SQL or something similar, which scares a lot of developers off. I personally think we should all know enough database stuff to be able to do this; it is useful not just for creating migrations, but for gaining a better understanding of databases and how to use them effectively. I don't have a real system at hand to pull a harder migration from, but a rough sketch follows below.
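Just to sketch the idea, with a hypothetical status column on the PERSON table, such a migration could add the column as nullable, backfill the existing rows, and only then tighten the constraint:

-- 0326_add_status_column_to_person.sql (hypothetical example)
-- Add the column as nullable first, so existing rows are accepted
ALTER TABLE PERSON ADD (STATUS VARCHAR2(20));
-- Backfill existing rows; this simple update could just as well be a
-- PL/SQL block doing a more involved calculation per row
UPDATE PERSON SET STATUS = 'ACTIVE' WHERE STATUS IS NULL;
-- Finally add the default and the not null constraint
ALTER TABLE PERSON MODIFY (STATUS VARCHAR2(20) DEFAULT 'ACTIVE' NOT NULL);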
The tools
We are using a simple tool called DBDeploy. It requires a table in the database that holds which revisions have already been applied, and it uses this information to figure out which of the numbered scripts on disk still need to be applied. In the case of the base line above, that would be scripts 308 through 325. It does no processing of the scripts; it just concatenates them into a new file with updates to the changelog table in between, which means you can write pretty much whatever you want inside the scripts. The test of whether it works is running the resulting file. Because DBDeploy itself doesn't execute anything, we have made a simple Ant integration with sqlplus for executing the statements. We used to call the sql task in Ant, but found that we needed some of the flexibility sqlplus offers to perform these migrations well.
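To give a feel for what the generated file looks like, here is a simplified, hypothetical excerpt; the exact changelog columns and bookkeeping statements depend on the DBDeploy version and configuration:

-- contents of 0308_add_index_on_person_name.sql
CREATE INDEX IDX_PERSON_NAME ON PERSON (NAME);
-- DBDeploy marks the script as applied in its changelog table
INSERT INTO CHANGELOG (CHANGE_NUMBER, COMPLETE_DT, APPLIED_BY, DESCRIPTION)
VALUES (308, SYSDATE, 'dbdeploy', '0308_add_index_on_person_name.sql');
-- ... then 0309 and its changelog entry, and so on up to 0325

Running that file through sqlplus both applies the changes and records them, so the next DBDeploy run starts from where this one ended.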
We are also using Maven to package and release these scripts in the same manner as we release code. That way we can always go back and see which DB state corresponds to which code, and we can pull down official releases from the Maven repository. It also integrates nicely into a Continuous Integration scheme: every time someone checks in a change it is automatically tested against a separate database. This way we catch most errors instantly and can fix them while we still remember what we were doing.
Some special cases
Simple is not easy. There are some things you need to be aware of, and some things that are not handled all that well. I do however believe that it is worth it regardless. Some of the things we have discovered:
- You can not change a migration after it is checked in to source control – The script has already been applied to a database and will not be run again, no matter how many changes you make to it after the first check in. There are some exceptions, but the basic rule is that it can not be changed once the first version was successful. Instead you might need to write an alter table statement for the alter table you just did (see the sketch after this list).
- Branches – Since the scripts are number based, there will be a problem when merging a branch into your trunk. What if both trunk and branch have a script number 62, but with different contents? First of all, DBDeploy will fail because it finds two scripts with the same number, so you will need to give the script you just merged in a different number. But you can't just give it the next free number, since some of the other scripts from the branch, which merged without conflicts, might depend on the conflicting one. So you should always make sure a change in the branch uses a number higher than the ones in trunk, and merge it down into trunk as fast as possible so no one else takes that number. Not very good, I admit, but it's the best solution so far.
- Synonyms – Synonyms depend on schema names. Handling this is not something DBDeploy forces on you, but we prefer to keep our scripts independent of user names, which lets us have several instances for different purposes in the same environment. We have added an Ant search and replace that lets us write scripts with synonyms and substitute the correct db schema when the scripts are run (see the second sketch below).
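As an illustration of the first point: suppose the comment column from script 0325 turned out to be too short. The fix is a new script with a new number (the number and name here are hypothetical), never an edit of the old one:

-- 0327_widen_comment_column_on_person.sql
ALTER TABLE PERSON MODIFY (PERSON_COMMENT VARCHAR2(4000));

And for the synonyms, we write the scripts against a placeholder that the Ant search and replace swaps for the actual schema owner before the script is run; the token below is just an example of what such a placeholder could look like:

-- synonyms point at a placeholder for the owning schema
CREATE OR REPLACE SYNONYM PERSON FOR @schema.owner@.PERSON;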
What I have mentioned here is very much still a work in progress, and my goal is that we will be able to change our database as we need to, instead of putting it off until it becomes a separate migration/rewrite project. We need to be able to make improvements with as little hassle as possible so our systems won't rot away.