Note: I have updated the presentation since first giving it - check out the new one. The new post also includes tips for converting an existing software project to CD.
My thinking on CD has advanced since last year, but the essentials remain the same. To do CD is to make a strategic decision to remove fear from the deployment process; to treat your test suite as an asset of the highest value; to truly value user feedback; to remove deployment as an obstacle to any other activity.
What has changed? Over the last year, I've done CD on one project and worked on another using a fortnightly release schedule. I've been able to compare the two and observe first-hand just how beneficial CD can be.
Continuous Deployment - Going Fast With Confidence
On the CD project, the complete lack of effort required to deploy changes has been a huge timesaver. I have never felt the need to wait before deploying one change, even if I was about to work on another. Little fixes, in other words, made it to production very quickly, pleasing my customers far more than assurances of "it'll be fixed next week" would have.
I think the best moment was when I noticed a user trying and failing to complete a wizard due to a bug. I fixed the bug and deployed - allowing them, on their sixth try (and probably to their complete surprise), to complete it. If ever there was a moment where I appreciated the value of a robust, quick deployment process, this was it.
Furthermore, this experience highlights one of the key benefits of CD. I could have hacked a fix on production - but it was easier to use the CD process, which included a full test suite run. There's simply no way any other process could have provided the same speed with the same level of assurance - hacking on production would have been the only faster option, and it would have been wildly dangerous. 
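The process described above - run the full test suite, and deploy only on green - can be sketched as a tiny gate function. This is a hedged illustration rather than my actual tooling; `run_tests` and `push_to_production` are hypothetical stand-ins for whatever your test runner and deploy script happen to be.

```python
def deploy_if_green(run_tests, push_to_production):
    """Deploy only when the full test suite passes.

    run_tests:          callable returning True iff every test passed
    push_to_production: callable that performs the actual deployment
    """
    if not run_tests():
        # A red build never reaches production. This gate is what makes
        # the fast path safe, unlike hacking directly on the server.
        return "aborted: tests failed"
    push_to_production()
    return "deployed"
```

The point is that the fast path and the safe path are the same path: there is no way to deploy that skips the suite, so "quick" never has to mean "risky".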
Fortnightly Deployment - The Lie
On the fortnightly deploy project, on the other hand, we encountered all the same issues that I'm so tired of.
We'd do a release, and then over the next two weeks some fixes would be marked as so urgent that we had to deploy just that fix, immediately. We'd made sure deployment was as close to a one-command process as possible. Even so, patching and testing the stable branch was an annoying break in rhythm, given that we were doing most development on trunk.
This was actually a point raised by Andy Chilton at the talk today. It seems that many project teams realise that there are some fixes that just have to make it out fast, and as a result they build a separate "hotpatch" channel to accommodate them.
In my view this is madness, no matter how well tended the "hotpatch" process is. Do your hotpatches go through the test suite? They certainly should! And why create a "fast path", and then forbid its use in ways that would delight your customers?
But I think my biggest objection is this: why have two processes when you could just have one? We coders know the evil that lies in needless duplication and complexity - which is exactly what a "hotpatch" system is. Duplication and complexity.
The whole idea of having a separate deployment process exposes the "fortnightly" claim as a lie anyway. Who can honestly claim they deploy every fortnight, if they're hotpatching? 
Objections to CD
Perhaps the strongest objection that came up was that clients wouldn't tolerate the possibility of things breaking without them being aware of it. To me, this objection has a slight air of childishness about it - I'd give it more credit if clients ever bothered to hire a world-class QA team, but they never do, and bugs slip into production all the time even with their checking. I think it's just our old friend, the "Cover Your Ass" policy, at work here.
Besides, nothing about CD precludes the possibility that they can still have a QA team checking things - with the able assistance of feature flags that limit features under development to just them. And I'd contend that the QA team would be just as delighted as the client themselves when told a bug they found half an hour ago was not only fixed on production and ready for them to check again, but that a test had been written to make sure it never happens again.
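Feature flags of the kind mentioned here can be as simple as a per-user allow list: a feature under development is invisible to everyone except the people it has been explicitly enabled for. A minimal sketch - the `FeatureFlags` class and its method names are my own invention, not any particular library's API:

```python
class FeatureFlags:
    """Per-user feature gating: a feature under development is
    visible only to the users it has been explicitly enabled for."""

    def __init__(self):
        self._allowed = {}  # feature name -> set of user ids

    def enable_for(self, feature, user):
        """Let one user (e.g. a QA tester) see an in-progress feature."""
        self._allowed.setdefault(feature, set()).add(user)

    def is_enabled(self, feature, user):
        """Everyone not on the list keeps seeing production behaviour."""
        return user in self._allowed.get(feature, set())
```

With something like this, half-finished work can ship to production continuously, while only the QA team (or the client) actually sees it until it's ready.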
Having said all of this, Brenda Wallace made the point that it all depends on the client, regardless of how good the idea sounds. Some simply won't change from what they know, and at the end of the day it's their project. Perhaps this is why CD is doing so well in the tech startup world - it's the startups themselves who are the clients.
Try It For Yourself - I'll Help
All up, it was a great discussion, and it seemed like many there could at least see how CD could be better. If you count yourself among their number, I encourage you to try it out on the next project you do, and see how you go. I'm more than happy to chat with you about it and share experiences if you do, so feel free to contact me if you want to discuss anything about it.
As an aside, I do intend to continue my Web App Performance series, I've just been focused on other things recently. Apart from business, I've joined the Standby Task Force and am developing scripts to automatically deploy an Ushahidi instance within a few minutes of a disaster occurring. More on that in a future post.
Footnotes

1. I'm the first to admit that this particular example was rather fortuitous, but I think it's even more relevant as your site gets busier. You'll see the errors occurring, diagnose and fix the problem, deploy - and it's inevitable that some customers will then begin to succeed at what they were doing. Contrast with hotpatching, where you could break the site for more people - or a slower deployment process, where more people would encounter the problem.

2. The CD example (the wizard fix) just goes to show how artificial this problem is. We were pushing back because our process made it harder than it should have been. Software development teams around the world do this all the time - lowering customer expectations about how long it takes to fix problems. I think we're doing our clients a disservice.

3. Substitute "weekly", "monthly" etc. as appropriate. If you tell me you deploy weekly, I bet you do more than 52 deployments in a year.

4. "Client" is defined here as "the organisation that uses the project for their benefit". For example, Fairfax uses Catalyst IT to develop stuff.co.nz. Fairfax is the client. In a tech startup, it's the startup themselves that gets the benefit from the project, so they're their own client.