Recently I released an update to puppet-solr, a puppet module I created earlier for setting up a multi-core solr instance on Ubuntu (tested on 12.04 LTS). One of the main goals for this release was to support the latest stable version of solr (4.4.0) and to include proper tests. So I started looking at several tools that would help me do this the right way. I also wanted a simple module development workflow and a development environment that is isolated from the host system for better testability.
While none of the material I'm going to cover in this post is completely new, it represents a distillation of a process that I picked up from several useful sources of information on the web. I have made a few improvements that I thought would make the process a bit easier. So, without further ado, here are the important tools that I used in this process:
Vagrant is quite a popular tool that makes working with virtualized environments easier. It is a very useful abstraction layer over VirtualBox that is completely controllable from the command line.
rspec-puppet brings rspec-style test matchers suited to puppet module development. For practicing something like TDD, this is a very valuable tool. serverspec is a similar tool, but it works at the system level, checking things like whether a particular directory was actually created or whether the server is listening on a particular port.
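To make the distinction concrete, here is a sketch of what the two kinds of specs look like. The class name, resource names, paths, and port below are illustrative, not taken from the module:

```ruby
# spec/classes/solr_spec.rb -- unit level, evaluated by rspec-puppet
# against the compiled catalog; no real system is involved
require 'spec_helper'

describe 'solr' do
  it { should contain_package('solr-common') }                    # illustrative
  it { should contain_service('tomcat6').with_ensure('running') } # illustrative
end

# spec/system/solr_spec.rb -- system level, run by serverspec
# against the live Vagrant instance
describe file('/etc/solr') do
  it { should be_directory }
end

describe port(8080) do
  it { should be_listening }
end
```

The unit specs fail fast when the catalog is wrong; the system specs catch the cases where the catalog compiles fine but the box still doesn't end up in the desired state.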
Finally, puppet-lint is a neat little tool which checks if your puppet code conforms to the Puppet style guide. I find this a very good way to stay up to date with Puppet syntax and best practices.
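Running it is as simple as pointing it at a manifest or a directory (the paths here are just examples):

```shell
# Lint a single manifest, or every manifest under a directory
puppet-lint manifests/init.pp
puppet-lint manifests/
```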
So after a bit of fiddling with these tools and trying to understand where they fit in, I think I evolved a simple puppet module development workflow that makes sense for me:
Write a failing system-level spec with serverspec
Write a failing unit-level spec with rspec-puppet
Fill out the code and make sure the unit-level specs pass.
Run 'vagrant provision', and make sure system specs pass.
Rinse and repeat.
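With the unit and system specs split into separate rake tasks, one pass through this loop looks roughly like this (the task names are my assumptions; your Rakefile may differ):

```shell
rake spec           # fast: rspec-puppet unit specs against the catalog
vagrant provision   # re-apply the manifests to the running box...
rake serverspec     # ...then check the live system with serverspec
```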
There are a few points to note here. I believe writing integration tests first gives a better idea of the functionality we want to build, so I make sure I write those first. In our case, serverspec comes with a nice integration hook for Vagrant: when I run the system specs, it checks whether the vagrant instance is up, brings it up if it hasn't been started yet, and runs the specs against the live system. Needless to say, these tests can be slow, so I have farmed them out into a separate rake task so that they run only when required (see Rakefile).
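A Rakefile along these lines keeps the two suites separate. This is a sketch: the spec patterns and task names are assumptions, not the module's actual Rakefile:

```ruby
require 'rspec/core/rake_task'

# Fast unit-level specs: rspec-puppet only, no Vagrant involved
RSpec::Core::RakeTask.new(:spec) do |t|
  t.pattern = 'spec/classes/**/*_spec.rb'
end

# Slow system-level specs: serverspec brings up the Vagrant box if needed
RSpec::Core::RakeTask.new(:serverspec) do |t|
  t.pattern = 'spec/system/**/*_spec.rb'
end

task :default => :spec
```

Keeping `:spec` as the default task means a plain `rake` run stays fast, and the VM-backed suite only runs when asked for explicitly.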
The unit-level specs are run far more often, so they need to be fast. But I noticed that serverspec was trying to start vagrant even when I was only running the unit-level specs, so I had to make a few changes to the spec helper hook that serverspec installs so that it behaves as expected. You can look at the slightly customized spec_helper.rb here.
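The idea behind the change can be sketched like this. The `SYSTEM_SPECS` environment flag is hypothetical, used here only to illustrate the guard; the linked spec_helper.rb differs in its details:

```ruby
# spec/spec_helper.rb (sketch)
require 'rspec-puppet'

RSpec.configure do |c|
  # Let rspec-puppet find the module under test
  c.module_path = File.join(File.dirname(__FILE__), 'fixtures', 'modules')
end

# Only pull in serverspec's Vagrant machinery when the system specs are
# actually being run, so plain unit runs never touch the VM.
if ENV['SYSTEM_SPECS']
  require 'serverspec'
  system('vagrant up') unless `vagrant status`.include?('running')
end
```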
Next, since I wanted a very straightforward way of working with Vagrant, I've added a Vagrantfile to the module itself, in addition to the vagrant directory, which contains the base puppet configuration code that sets up the vagrant instance. This makes it as simple as running 'vagrant up' to get a development environment, or you can just use the instance for testing the module.
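For reference, a minimal Vagrantfile for this kind of setup might look like the following. The box name, forwarded port, and manifest paths are assumptions, not copied from the module:

```ruby
Vagrant.configure('2') do |config|
  config.vm.box = 'precise64'   # Ubuntu 12.04 LTS base box (assumed)
  config.vm.network :forwarded_port, guest: 8080, host: 8080

  # Apply the base puppet configuration from the vagrant directory
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = 'vagrant/manifests'
    puppet.manifest_file  = 'site.pp'
  end
end
```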
Finally, since we went to the trouble of setting up a proper test suite, why stop there? I wanted to hook it up to travis-ci, and it was fairly straightforward to set that up too. You can look at the build status here.
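A minimal .travis.yml for a puppet module is roughly the following (a sketch; the Ruby version and script line are illustrative and the actual file may differ):

```yaml
language: ruby
rvm:
  - 1.9.3
script: bundle exec rake spec
```

Note that only the fast unit suite runs in CI here, since the system specs need a Vagrant/VirtualBox host.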
That's pretty much it. I hope you enjoyed reading this as much as I enjoyed figuring it all out. I'd be very keen to know what you think of this approach and hopefully hear about yours. I'm @vamsee on Twitter.