Last update: 02/06/2016 13:53
A taste of how we get things deployed at Lato Sensu Management and Vae Soli!!
This article won't dive into every detail of deployment. Rather, it explains how WE, at Lato Sensu, bring things to prod when we release Vae Soli!, our web framework.
When we decide to launch a new release of Vae Soli! it's because we think it has value and it is safe enough for prod (we have tested it). This is a conscious decision that does not take a lot of arguments: we always know what is new (we have an exhaustive list of changes that is automatically extracted from our code base), for what purpose it was developed, and we have a quality indicator for all the methods/functions/routines we have developed.
Because we use our own code on a number of websites, we know it's working software (we have tried it in a number of close-to-real situations). Maybe it hasn't been tested to the fullest (this highly depends on the number of test cases we have created along with the code that gets built). That's OK! After all, when we build an increment the entire code base is parsed by our own tool, the Vae Soli! Documentor, in order to rebuild the entire doc set, and during this parsing automated tests get triggered and their results are included in the doc. All these tests live within our code (the truth is ALWAYS in the code), in comments. With time they have become more numerous than when we started: there are about 17000 at the moment we write these lines.
These tests get executed on top of "in context" testing (the tests we run when we develop on a prod-like environment).
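The principle of tests living in code comments can be sketched in a few lines of shell. The "@test:" annotation, the file layout, and the expressions are made up for illustration; Vae Soli!'s real syntax differs:

```shell
#!/bin/sh
# Minimal sketch: scan source files for a test annotation embedded in
# comments and evaluate each expression, mimicking tests that live next
# to the code they verify. The "@test:" marker is a made-up convention.

mkdir -p /tmp/vs_demo
cat > /tmp/vs_demo/sample.php <<'EOF'
<?php
// @test: [ $((1+1)) -eq 2 ]
// @test: [ -n "abc" ]
function Trim( $s ) { /* ... */ }
EOF

# Extract each annotated expression and run it through the shell.
grep -h '@test:' /tmp/vs_demo/sample.php | sed 's/.*@test: *//' |
while read -r expr; do
  if sh -c "$expr"; then echo "PASS: $expr"; else echo "FAIL: $expr"; fi
done > /tmp/vs_demo/result.txt

cat /tmp/vs_demo/result.txt
```

The appeal of the approach is that the test cannot drift away from the code: it is parsed out of the same file the Documentor already reads.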
Moreover, we seize the opportunity to run quick performance tests, entirely automated, always on the same environment and always in the exact same conditions. This permits easy trend analysis between two releases. These tests are automated thanks to WGET calls that are triggered as background tasks. When the performance testing comes to an end, its result is compared to a threshold value (within an acceptable range). If it fits the range, the whole thing is packed (whatever that means): for us it means that the code base is zipped with a version tag and gets promoted to production via a simple FTP command (once again … automated) (it gets in prod … but it is not yet activated).
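The pack-and-promote gate can be sketched as follows. The version number, threshold, paths and URLs are placeholders, not Vae Soli!'s real values, and the timing is faked so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch: compare a (faked) performance measurement against a threshold,
# and only if it fits the range, archive the code base with a version tag.
# All names (version, threshold, paths, URLs) are illustrative.

VERSION="4.2.0"
THRESHOLD_MS=500

# In the real flow the measurement comes from background WGET calls, e.g.:
#   wget -q -O /dev/null "http://test.example/page" &
elapsed_ms=120   # pretend the run took 120 ms

if [ "$elapsed_ms" -le "$THRESHOLD_MS" ]; then
  mkdir -p /tmp/vs_build/src
  echo '<?php /* framework */' > /tmp/vs_build/src/VaeSoli.php
  # Pack with the version tag in the archive name ...
  tar -czf "/tmp/vs_build/vaesoli-$VERSION.tar.gz" -C /tmp/vs_build src
  echo "packed vaesoli-$VERSION.tar.gz"
  # ... then promote with a single (automated) FTP command, e.g.:
  #   curl -T "/tmp/vs_build/vaesoli-$VERSION.tar.gz" "ftp://prod.example/"
else
  echo "performance regression: ${elapsed_ms}ms > ${THRESHOLD_MS}ms"
fi
```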
In prod, a Live agent watches for new deployments. It unpacks the zip and deploys it in a 3-bullet barrel mode (actually 4, but explaining that now would divert us from our main course): version n-1, version n, version n+1. As all sites point to our framework thanks to a symbolic link (actually a double redirection), we simply update the link to point to the newest version of our framework and we're done.
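The symbolic-link switch boils down to this (paths and version numbers are illustrative, not our real layout):

```shell
#!/bin/sh
# Sketch of the switch: versions sit side by side and one symbolic link
# decides which one is live. Paths and version numbers are illustrative.

ROOT=/tmp/vs_prod
mkdir -p "$ROOT/vaesoli-4.1.0" "$ROOT/vaesoli-4.2.0"

ln -sfn "$ROOT/vaesoli-4.1.0" "$ROOT/current"   # version n is live

# Deploying version n+1 is just repointing the link; every site that
# resolves $ROOT/current picks up the new version at once.
ln -sfn "$ROOT/vaesoli-4.2.0" "$ROOT/current"
readlink "$ROOT/current"
```

Because repointing a link is near-instantaneous, the switch doubles as the activation step: nothing is "half deployed" from the sites' point of view.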
The whole process of doc generation, automated tests, performance testing, and packaging takes no more than 5 minutes from the moment we start it to the launch to prod (yet … it's not activated at that stage!)
If we discover that something goes wrong, we update the symbolic link again to point back to the previous version. For us, the "Restore Service" step of Incident Management takes no more than 1 minute from the moment we discover the problem.
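Rolling back is the same move in reverse, a sketch under the same assumed layout (illustrative paths and version numbers):

```shell
#!/bin/sh
# Sketch of "Restore Service": repoint the link back at version n-1.
# Layout and version numbers are illustrative.

ROOT=/tmp/vs_rollback
mkdir -p "$ROOT/vaesoli-4.1.0" "$ROOT/vaesoli-4.2.0"
ln -sfn "$ROOT/vaesoli-4.2.0" "$ROOT/current"   # faulty release is live

# Incident! One command points everybody back to the previous version.
ln -sfn "$ROOT/vaesoli-4.1.0" "$ROOT/current"
readlink "$ROOT/current"
```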
Now that we have seen how we build and deploy, let's see how it maps to the generic schema we have published under the label of our own Agile methods, L(i)VID and SAMBA.
The code is all stored in a code repository, a sort of version control system. In our case, our repository is simply a network folder. Each time we produce a new version, the whole code base is zipped so we can't get confused between versions. As we never branch code (a flat version line vs. a fishbone or tree versioning scheme), we always start from the very latest version (the one in our network folder - consider this to be the trunk).
So in our case the "version control thing" is … a simple folder (but we recognize the value of much more sophisticated tools; we simply have no need for them in our case: it would be "over-what-have-you", and that is not Lean).
We maintain our specs, constraints, and other input doc in a subfolder, as well as our Definition of Done (DoD). Our backlog is a simple Excel sheet (it contains approximately 16000 items; all done items are at the top; a slim line separates done from not done; below the slim line, all items are ordered: the items that must be taken first are at the top, right below the slim line).
All of that is crunched by our Build tool: the DoD, the specs and other constraints, the backlog, etc. As output, the Build tool produces releasable code (the series of 0s and 1s), the Release Notes (what is new + what has changed), the updated doc (the Vae Soli! Documentor is triggered by the Build), and the test reports (actually part of our doc). All these items are listed in our BOM (Bill Of Materials) so that we always know what's in the box. The whole thing is packaged as a .zip file and finally delivered to prod with a single FTP command. Deployment, not part of the illustration, is carried out in prod by a specific agent.
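A BOM of this kind can be as simple as a checksummed file listing. A minimal sketch with illustrative file names (not our real build output):

```shell
#!/bin/sh
# Sketch of a BOM: list every file of the release with a checksum so we
# always know what's in the box. File names are illustrative.

REL=/tmp/vs_rel
mkdir -p "$REL"
echo 'code'          > "$REL/VaeSoli.php"
echo 'release notes' > "$REL/ReleaseNotes.txt"

# One line per file: checksum, size, path - excluding the BOM itself.
( cd "$REL" && find . -type f ! -name BOM.txt -exec cksum {} \; ) \
  | sort > "$REL/BOM.txt"
cat "$REL/BOM.txt"
```

The checksums also give you a cheap integrity check on the prod side: recompute them after the FTP transfer and compare against the BOM.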
We have chosen PHP to develop Vae Soli!, which happened to be a good choice in terms of Build, Delivery, Deployment and Run.
In the way we use PHP, there is no need to hand our code over to any sort of external tooling to make it run: no WebSphere, no JBoss, no Tomcat, no nothing. Code runs as we have crafted it. The purpose of this remark, sibylline as it may seem, is not innocent (and it goes beyond the 3 products we have given as examples; we rather draw your attention to the principle).
For us, there is no need to stop and restart any external tool, aggregator, orchestrator, application server, … As soon as one source gets deployed, it is usable. In our case it is not immediately in use, though, as we work with the 3-bullet barrel principle: only at the moment everything gets deployed do we switch the symbolic link to the newest version of our framework. This serves as activation.
Usually we are able to deploy a full version in less than 8 minutes. This speed gives you a good sense of what agility really is: the ability to move quickly. The technology we use is no stranger to that speed.
Making sure all these tests are still relevant … this is our challenge at the moment.