Shipping Database Changes

T-SQL Tuesday is a monthly event where SQL Server bloggers write a post based on a subject chosen by the month’s host. This month’s #tsql2sday event is being hosted by James Anderson (b|t) and this month’s topic is: Shipping Database Changes.

This month’s #tsql2sday topic couldn’t have come at a better time. We’re once again faced with some of the biggest changes in recent memory to hundreds of client databases (many that we host, but some that we don’t), so I’m anxious to see what others are doing. But if I’m going to take from everyone else’s experience, it only feels right to try to give something back.

I’ve been at EnergyCAP for almost 13 years. The database and application were started ~18 years ago by a team at Enron (it’s a long story…). They worked to transition the product from Sybase to SQL Server. And although the product was never used by a paying Enron customer, in three years of development they went from version zero of the database to version 39 when our CEO re-acquired the IP.

From then until now, we’ve tried a lot, learned a lot and still have plenty of things we “wish” we could do better.  (insert your favorite Animaniac gazing wide-eyed at the “town of Continuous Integration” with little hearts bubbling in the air…).

Anyway, I’ll compare our journey to the basic stages of primary education.  Somehow, that seems a fitting comparison as I think about where we’ve been, what we’ve learned along the way, and where we anxiously hope to be in years to come.

Preschool Bliss

The initial process was, honestly, pretty simple, for two reasons: there weren’t many clients to support yet (most of which never touched the database outside of the application), and our expectations were pretty low. The process looked like this (generally):

Step 1: Change model in Erwin based on developer needs.
Step 2: Export the “create new” database script.
Step 3: Run a myriad of batch files that would take the new script, a source database for “core-pop” data, and some other mojo, resulting in an “upgrade” script (sketched roughly after these steps) that would:

  • Rename all of the existing tables to ‘TablenameOld’
  • Recreate all tables (including any new ones)
  • Select/Insert all data from old table to new table (turn off identities, etc.)
  • Add constraints, indexes, triggers and SPROCs.
  • Delete old tables
  • Verify any “core-pop” data (defined properties/data that our app requires) and insert/update as appropriate
  • Set appropriate permissions

Step 4: Take “upgrade” script and run it on client databases.
Step 5: Pray… Wait! That was supposed to be step 4!!
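
For flavor, here’s roughly what that rename-and-recreate dance looked like for one table. This is a minimal sketch, not our actual generated script; the table and column names are invented, and the real thing repeated this pattern for every table in the database:

```sql
-- Hypothetical table; the generated script did this for all of them.
EXEC sp_rename 'dbo.Meter', 'MeterOld';
GO

-- Recreate the table with the new shape (here, one added column).
CREATE TABLE dbo.Meter (
    MeterID     INT IDENTITY(1,1) NOT NULL,
    MeterName   VARCHAR(100)      NOT NULL,
    CommodityID INT               NULL    -- the new column this "upgrade" exists to add
);
GO

-- Select/Insert the old data, preserving identity values.
SET IDENTITY_INSERT dbo.Meter ON;

INSERT INTO dbo.Meter (MeterID, MeterName)
SELECT MeterID, MeterName
FROM dbo.MeterOld;

SET IDENTITY_INSERT dbo.Meter OFF;
GO

-- Drop the old copy, then re-add constraints, indexes, triggers, SPROCs, and
-- permissions (the real scripts juggled this ordering to dodge name collisions).
DROP TABLE dbo.MeterOld;
GO

ALTER TABLE dbo.Meter ADD CONSTRAINT PK_Meter PRIMARY KEY CLUSTERED (MeterID);
GO
```

Multiply that by every table, index, and trigger in the schema and you get a sense of why the scripts were enormous and slow.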

This was not a transactional script… of course, who would want the entire database to be whacked and recreated in a single transaction… but I digress. Our only defense was to make sure we had a backup (or our self-hosted clients did) before the script ran. Once we were past statement #1, all bets were off.

And we felt the pain of that. We didn’t upgrade often because it took a long time AND correcting data when things bombed was cumbersome. There were MANY times we had a client call in need of immediate assistance because the script failed and their backup wasn’t good. Fun times and all of that. From 2004 (when I showed up) until 2010 there were 7 database modifications, slightly more than one a year.

The Grade School Days

At some point in 2010 we finally said “enough is enough” (no, really, I’m pretty sure that’s exactly how the conversation went after a long night of database upgrades (read: recreations)).

Although I don’t believe we had used RedGate SQL Compare yet (was it even around in 2010??), our DBA took the challenge to move away from the “recreate as upgrade” process and to start doing a better job tracking what had actually changed. Because he’s wicked smart with this SQL stuff, he took the first set of scripts and modified them to take some hand-coded upgrade parts of his, fetch and recreate the SPROCs, and mash in the triggers that were exported from the modeling software, eventually producing a more traditional “upgrade” script. I can still remember the first time I ran an actual “upgrade” script on our largest database… and promptly called him in fits of joy because it only took ~10 minutes. (again, insert googly-eyed, heart-spewing Animaniac here…)

It was good progress.

Some things we learned along the way that improved our ability to scale to hundreds of databases with more confidence included:

  • Making sure the beginning of the scripts actually verified they were running against the correct version of the database.  Did you know it turns out not all clients read instructions? No… seriously…
  • Finally taking this “upgrade” approach allowed us to decouple app releases from database version updates.  If we needed to make incremental changes to the database as we worked on the new release (we had long release cycles at the time) we could create incremental upgrade scripts that were easily rolled up into one master upgrade when a new version of the product shipped.
  • At some point in here, we also introduced better error checking and transactional changes. Since the changes were usually smaller in scale, we could gain some control over “getting out safely” when things failed. It didn’t always work, but it was a start. (There’s a sketch of the basic pattern just after this list.)
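
In today’s terms, the version gate plus “getting out safely” looks something like the sketch below. The version table and numbers are invented for illustration, and THROW requires SQL Server 2012+ (our earlier scripts leaned on RAISERROR), so treat this as the shape of the idea rather than what we actually shipped:

```sql
-- Hypothetical version gate: refuse to run against the wrong database version.
DECLARE @CurrentVersion INT;

SELECT @CurrentVersion = VersionNumber
FROM dbo.DatabaseVersion;

IF @CurrentVersion <> 39
BEGIN
    RAISERROR('This script upgrades version 39 only; found version %d. Aborting.', 16, 1, @CurrentVersion);
    RETURN;  -- stop before any changes are made
END;

BEGIN TRY
    BEGIN TRANSACTION;

    -- ...the incremental schema and core-pop changes go here...

    UPDATE dbo.DatabaseVersion SET VersionNumber = 40;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    THROW;  -- re-raise so the failure is visible to whoever ran the script
END CATCH;
```

The gate at the top is the part that saved us from the clients who, it turns out, don’t read instructions.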

The Middle School Days (Current Status)

Starting around 2012 we investigated using RedGate SQL Compare to help with this process. Since we generally script out the initial implementation of the changes before updating the model, it was easy to make a new database with our “create new” script, apply those changes, and then do the compare. In doing so, a lot of the error checking we had introduced previously was a standard part of the scripts the tool created. As I’ve said before, this wasn’t a perfect solution, but it got us another step further in the process.

In general, the biggest issues we hit continue to be client customizations to the database (even ones we sanction) and an ever-growing set of core-pop data that we manage and have to proactively defend against client changes. This is an area we only recently admitted we need to take a long, hard look at so we can figure out a new paradigm.
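
To make “defend the core-pop data” concrete: for a lookup table it mostly boils down to an upsert that repairs client edits and restores client deletions while leaving client-added rows alone. A minimal sketch, with an invented table and invented values:

```sql
-- Hypothetical core-pop table and rows; the real set is much larger.
MERGE dbo.ReportType AS target
USING (VALUES
    (1, 'Cost Avoidance'),
    (2, 'Benchmarking')
) AS corepop (ReportTypeID, ReportTypeName)
    ON target.ReportTypeID = corepop.ReportTypeID
WHEN MATCHED AND target.ReportTypeName <> corepop.ReportTypeName THEN
    UPDATE SET ReportTypeName = corepop.ReportTypeName      -- repair client edits
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ReportTypeID, ReportTypeName)
    VALUES (corepop.ReportTypeID, corepop.ReportTypeName);  -- restore client deletions
```

The hard part isn’t the statement; it’s deciding, across hundreds of databases, which differences are sanctioned customizations to respect and which are drift to stomp on.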

I should mention that it was also about this time that we were finally able to proactively get our incremental changes into source control. All of our final scripts were in source control somewhere, but the ability to use SQL Compare and SQL Source Control allowed our developers to finally be a second set of eyes on the upgrade process. No longer were we weeding through 50K lines of SQL upgrade just to try and find what changed. Diffing whole scripts doesn’t really provide any good context… especially when we couldn’t guarantee that the actions in the script were in the same order from release to release. This has been another huge win for us.

These two products have been invaluable to us.  Even for cases when a client messes up an upgrade and we’re stuck with a partial setup, using SQL Compare has saved our bacon many times.

Looking Towards High School

In all, I’d say that we’ve been in a protracted time of middle school. Being small and resource-constrained, it took us longer to get through SQL pre-algebra and composition classes than some. We’ve had our fits of wishing we were up walking the halls with the big kids, doing automated database deployments or including more of the development staff in the process for backup.

In fact, right now we’re about to make some of the most drastic changes to our schema and process in all of the time I’ve been here, and it’s given us another chance to talk about our dreams and desires for the process. We’ve even included our small (but growing) DevOps team in more of the discussions, looking for ways we can help each other.

What about you?

What resonates with you about our journey?  Are you a little further along and have tips?  Please… share!

In the meantime, I can’t wait to read all of the other posts on this topic to see what we might be able to glean.
