
How we do version control for our database

August 4th, 2017

Our stack consists of a platform and many apps running on top of it with a lot of engineers working on these various apps.

Even though we predominantly run in the cloud, we also have customers running our product on-premise and on private clouds, which means we don’t have the luxury of a single version of our apps deployed to all customers. Furthermore, different customers may choose different apps, so not every app is installed for every customer.

We use Mercurial for source control and that works pretty well, but one thorny issue was keeping our database structures versioned as well.

Our requirements for keeping our database in version control were:

How it works

Each application maintains a data dictionary – an xml file that defines the database structures used by the application.

This file contains tables, stored procedures, triggers, views, functions and indexes.

Here’s an example of a table defined in the data dictionary:

<Table name='UserMaster'>
  <Columns>
    <Column name='UserKey' type='pk' />
    <Column name='LoginID' type='varchar(255)' nullable='false' />
    <Column name='Email' type='varchar(1024)' />
    <Column name='FirstName' type='varchar(255)' nullable='false' />
    <Column name='LastName' type='varchar(255)' nullable='false' />
    <Column name="FullName" type="varchar(510)" nullable="false" iscomputed="1" expression="((isnull([FirstName],'')+' ')+isnull([LastName],''))" />
  </Columns>
  <Indexes>
    <Index columns="LoginID" type="NONCLUSTERED" name="IX_UserMasterLoginID" />
  </Indexes>
</Table>

The table’s entire definition is encapsulated as a structured block.

Maintaining a version history is pretty straightforward, and so is diffing and merging when doing updates.

More complex database objects like functions and stored procedures contain arbitrary code, so for those we just dump the code into a CDATA block in the xml.

<Function name="DateOnly"><![CDATA[
 create  function dbo.DateOnly(@DateTime DateTime)
 returns datetime
 as
     begin
     return dateadd(dd,0, datediff(dd,0,@DateTime))
     end

]]>
</Function>

These are still easy to maintain in version control.

When an application undergoes enhancements and bug fixes that require database changes, this data dictionary gets modified along with the application’s code and the two get committed together.

Merging the structures from files into the database

So we have this data dictionary on disk in an easily parseable format. How do we transfer this to the live database when we get an updated version?

Generating the sql needed to populate the database is pretty straightforward if your database is empty and you’re putting in these structures for the first time. However, if you already have a live system, then a naive drop-and-create operation on tables is a non-starter.

Instead, we read the data dictionary, query the database’s metadata to determine the corresponding table structures in the live database, and use that to generate alter statements that bring the database in sync with the structure in the dictionary.
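As a rough illustration, here is a minimal sketch of that comparison in Python. It assumes pyodbc and SQL Server’s INFORMATION_SCHEMA views; the function names and the XML handling are illustrative, not our actual tool.

# Minimal sketch (not the actual tool): compare a table definition from the
# data dictionary against the live database and emit alter statements.
import xml.etree.ElementTree as ET
import pyodbc  # assumes a SQL Server connection is available

def dictionary_columns(table_xml):
    # Parse <Column> elements from a <Table> block into {name: type}.
    table = ET.fromstring(table_xml)
    return {c.get('name'): c.get('type') for c in table.find('Columns')}

def live_columns(conn, table_name):
    # Read the live column names for the same table from INFORMATION_SCHEMA.
    rows = conn.cursor().execute(
        "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = ?", table_name).fetchall()
    return {row.COLUMN_NAME for row in rows}

def alter_statements(table_name, wanted, existing):
    # Emit alters that bring the live table in sync with the dictionary.
    stmts = ["ALTER TABLE %s ADD %s %s" % (table_name, col, typ)
             for col, typ in wanted.items() if col not in existing]
    stmts += ["ALTER TABLE %s DROP COLUMN %s" % (table_name, col)
              for col in existing if col not in wanted]
    return stmts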

Handling data type changes

Certain kinds of structure changes are easy to handle – adding and removing columns, indexes or entire objects. Other kinds – like changing a column’s type from integer to float, or making a non-nullable column nullable or vice versa – are trickier. We try to be intelligent about these based on our common use-cases and generate the required sql, but if in doubt, we fail loudly and let someone handle the change manually.
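A sketch of that decision rule, with a purely illustrative whitelist of safe conversions (the real rules are driven by our actual use-cases):

# Sketch of the "be intelligent where we can, fail loudly otherwise" rule.
# The whitelist of safe conversions below is illustrative, not our real rules.
SAFE_CONVERSIONS = {
    ('int', 'bigint'),
    ('int', 'float'),
    ('varchar(255)', 'varchar(1024)'),
}

def type_change_sql(table, column, old_type, new_type):
    if old_type == new_type:
        return None
    if (old_type, new_type) in SAFE_CONVERSIONS:
        return "ALTER TABLE %s ALTER COLUMN %s %s" % (table, column, new_type)
    # Anything we are not sure about gets surfaced to a human instead of guessed at.
    raise ValueError("%s.%s: %s -> %s needs manual handling"
                     % (table, column, old_type, new_type))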

Handling execution order

Execution order was another annoying problem to solve. Many database objects rely on each other and so when you generate the sql required to update the table structures, you need to make sure they are sorted such that you are not creating objects that depend on other objects which have not yet been created. Otherwise you start getting errors when merging into the database.

To solve this, we construct a directed graph out of the database objects, with the objects as nodes and the dependencies between them as edges. We then apply a topological sort to the graph to determine the order in which the alter scripts should be executed.
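In Python this is close to a one-liner with the standard library’s graphlib (a sketch, not our implementation):

# Sketch: order the generated scripts so that dependencies are created first.
# graphlib is in the standard library from Python 3.9 onwards.
from graphlib import TopologicalSorter

def execution_order(dependencies):
    # dependencies maps each object name to the set of names it depends on;
    # static_order() yields prerequisites before their dependents.
    return list(TopologicalSorter(dependencies).static_order())

# A view reading from a table must be created after the table.
print(execution_order({'vw_UserEmails': {'UserMaster'}, 'UserMaster': set()}))
# -> ['UserMaster', 'vw_UserEmails']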

Constructing a dependency graph from database objects

The next challenge was actually constructing the graph of dependencies between objects. One easy solution would have been to simply require developers to define those dependencies in the data dictionary. That seemed like a half-assed solution.

Any extra work that doesn’t meaningfully do something for the developer is not going to fly. Asking them to add dependencies just to satisfy a tool means it will eventually stop happening and things will start getting sloppy. Instead, we wanted to extract the dependencies from the objects themselves.

Our first attempt was a “proper” solution – we wrote a recursive descent parser for SQL by examining the grammar definition in the documentation. Once we had a working parser, we were able to reliably extract dependencies from a given piece of sql code.

While this would have been a proper solution, we ran into a practical issue:

Our parser didn’t parse the full T-SQL language – only the subset that we happened to use. That worked ok -until the moment developers started using newer features of the language that our parser couldn’t handle. Developers would then have to wait until the parser was updated to handle it – and this was a bottleneck because we didn’t have the luxury of a full-time developer working on maintaining this tooling.

We then played around with a dumb solution – instead of parsing the code, we simply extract the tokens and check whether any of them match an existing object name. If so, we mark it as a dependency.
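Something along these lines (a sketch; the regex and object names are illustrative):

# Sketch of the token-matching approach: anything that looks like an identifier
# and matches a known object name is treated as a dependency.
import re

def extract_dependencies(sql_text, known_objects):
    # Pull out identifier-shaped tokens and compare case-insensitively,
    # since T-SQL identifiers usually are.
    tokens = {t.lower() for t in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql_text)}
    return {name for name in known_objects if name.lower() in tokens}

sql = "create view vw_UserEmails as select LoginID, Email from dbo.UserMaster"
print(extract_dependencies(sql, {'UserMaster', 'OrderMaster'}))
# -> {'UserMaster'}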

This is an extremely naive approach but it turns out to work surprisingly well. We have yet to run into a dependency-related bug from this approach.

At some point in the future we may switch to the parsing library that SQL Server ships, which would give us a more robust solution, but for now this works fine.

The lesson as always: Worse is Better

Putting it all together

All of this is wrapped up in a tool that provides a user interface to the engineer performing the merge operation on the database.

It pops up a list of objects that are going to be modified and lets you inspect the actual script that is going to run to perform the modification.

Any potentially destructive item is highlighted in red and disabled by default, so it has to be explicitly enabled after inspection. Destructive items are defined as any dropped columns or changed data types; changes to stored procedures, views and functions are not considered destructive.
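The rule itself is simple; sketched in code (the change kinds here are illustrative, not the tool’s actual categories):

# Sketch of the destructive-change rule used to decide what gets highlighted
# in red and disabled by default.
DESTRUCTIVE_KINDS = {'drop_column', 'change_column_type'}

def is_destructive(change_kind, object_type):
    # Procedure, view and function bodies can always be recreated from the
    # dictionary, so rewriting them is not treated as destructive.
    if object_type in {'procedure', 'view', 'function'}:
        return False
    return change_kind in DESTRUCTIVE_KINDS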

Once you have reviewed everything, hit a button to apply the change and then you’re done.

Loose Ends

There are some things that this procedure doesn’t handle very well. One is data migration. Any structure change that requires a complex data migration doesn’t play well with this system.

For now, we use release notes to specify migration scripts that need to be run before and after the database merges are done.

On some customer sites with tight authentication implemented at the database level, this tool doesn’t have the permissions required to query the metadata tables that contain the object structures. In those cases we replay the changes on an identical setup elsewhere, produce the final scripts, and send them to the customer for deployment.

Other solutions

There are other ways to do this.

The ‘Event Sourcing’ method

One suggested method is to simply use a long series of sql statements – starting with your original table structures and then appending alter scripts sequentially – so your database file is essentially an initial structure plus a long series of alterations to it. To get the latest structure, you start at the beginning, create the original structure, and keep applying changes until it evolves into the current state.
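Sketched out, provisioning under that scheme looks something like this (the file names and runner are hypothetical):

# Sketch of the 'event sourcing' style: a new database is built by replaying
# an ever-growing, ordered list of scripts from the very first CREATE onward.
MIGRATIONS = [
    '001_create_usermaster.sql',
    '002_add_email_to_usermaster.sql',
    '003_backfill_fullname.sql',  # data migrations can sit in the same sequence
]

def provision(cursor):
    for path in MIGRATIONS:
        with open(path) as f:
            cursor.execute(f.read())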

The nice thing about this is that you can embed your migration scripts also into this sequence.

The downside is that you end up with large files that need to be executed whenever you provision a new database (and in our case every customer gets their own database, so this happens every time we set up a new trial).

Liquibase

Liquibase is a tool that does something very similar to what we do. It didn’t exist (or we weren’t aware of it) when we started building our tool, so we never considered it. It seems a little too heavy for our taste, though, and it requires a more complex format that stores change sets along with the base structures.

Have any suggestions? Send us some feedback.