Using a Jenkins Pipeline for cleaner CI

So you hacked together a shiny new web application, pushed it out to your production system using some messy bash commands to transfer the artifacts you built locally on your somewhat-reliable company laptop, and guess what? It works! Well, mostly… As your application begins to grow, you might start scripting some of that messy deploy process. Oh, and let’s not forget the unit and end-to-end testing you’ll now be building into your project. If you would rather avoid those hacked-together scripts, and the productivity hit you take every time you nearly melt your laptop running the full suite of tests against a local instance of your web app, I’ll show you how to use Jenkins with pipelines to do just that.
The Problem with Scripts
First, let’s talk about the issues that caused the Bandwidth Dashboard development team to decide it was time to replace our scripted deployment method with something a little more robust.
- It Couldn’t Scale: For every new piece added to our application, the scripts had to be updated. Whether you’re using bash, python, or something else, this can get out of hand quickly. Every line of code added to put new files in different locations, update permissions, and restart processes created more complexity and a greater risk of something going wrong.
- It Was Unmaintainable: The increasing complexity, from things like conditional logic that altered the deployment based on the state of the system, led to a set of scripts that only a handful of developers could interpret. A process like this, grown organically to meet the immediate needs of deploying the application, quickly becomes a big problem.
- It Was Expensive: This one may be less obvious, because I’m not talking directly about dollars. The expense came in the form of developer hours and productivity. For our team, the overhead of the manual processes involved in preparing for and executing a release was starting to have a significant negative impact on new feature development.
Why Jenkins Alone Is Not Enough
Most developers are probably already familiar with Jenkins as a build server and continuous integration (CI) tool. If you’re not, the Jenkins project site is a good place to start. You can (and should) use Jenkins as your primary tool for building and testing your code, and before you start assembling a complicated pipeline, I highly recommend getting the individual jobs that will make up that pipeline configured and working on their own. Jenkins also makes automatic triggering of build and test jobs very simple to configure. You can even chain those jobs together so that a new commit to your repository triggers a build, which then triggers a deploy to your test environment, which finally triggers an automated test run. The problem with this approach is that there is still no clear visual of the flow of these jobs for one particular commit ID. You end up clicking through multiple executions of loosely linked jobs, trying to match commit IDs and timestamps just to understand where your code is and which version was actually tested.
Pipelines to the Rescue
Using pipelines in Jenkins provides a solution to the problem of tracking a particular commit through the build ➔ deploy ➔ test process by creating an easily readable visual of the set of jobs that corresponds to each code change. A simple example of the visual that results from the Jenkins Pipeline Plugin looks like this:

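A roughly equivalent flow, expressed in Pipeline script (Jenkinsfile) syntax, might look something like the sketch below. The stage names and shell commands are placeholders for whatever your project actually uses:

```groovy
// Minimal build → deploy → test sketch in declarative Pipeline syntax.
// The shell scripts referenced here are placeholders for your own tooling.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'                    // compile and package the app
                archiveArtifacts artifacts: 'dist/**'
            }
        }
        stage('Deploy to Test') {
            steps {
                sh './deploy.sh test'              // push the build to a test environment
            }
        }
        stage('Test') {
            steps {
                sh './run-e2e-tests.sh test'       // run the automated suite against it
            }
        }
    }
}
```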
As you can see, using the pipeline method allows you to track the build, deploy, and test steps for a specific execution of your CI flow. You can even tag the commit that corresponds to an execution of the pipeline, store the artifacts that result from a build of that commit, and pass variables between steps in the pipeline. We found this particularly helpful in making our CI process more granular, splitting different parts of the build and test execution into multiple jobs. By simply passing a build tag and the access information for the build artifacts between jobs, we gained more parallelization, and ended up with a pipeline that looked more like this:

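The hand-off between those jobs can be as simple as passing one parameter. Here is a hedged sketch of what that fan-out might look like in Pipeline script syntax; the downstream job names, the `BUILD_TAG` parameter, and the tag format are all illustrative placeholders, not the exact ones we used:

```groovy
// Sketch: hand a tag identifying this build's artifacts to downstream jobs,
// and run independent test jobs in parallel. Job and parameter names are
// hypothetical.
def buildTag = "dashboard-${env.BUILD_NUMBER}"

parallel(
    'unit tests': {
        build job: 'run-unit-tests',
              parameters: [string(name: 'BUILD_TAG', value: buildTag)]
    },
    'e2e tests': {
        build job: 'run-e2e-tests',
              parameters: [string(name: 'BUILD_TAG', value: buildTag)]
    }
)
```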
Each step in our pipeline could trigger multiple downstream tasks and evaluate conditions (based on things like build status and test results) to determine whether the next steps should be executed. Finally, we incorporated manually triggered steps to deploy to the staging and production environments after the successful execution of every other step in the pipeline. Using Jenkins and this pipeline strategy, we traded the bash scripts for a world in which each commit triggers a new pipeline execution, and a single button press puts the resulting tagged artifacts into service in production.
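To illustrate the manual gate, Pipeline’s `input` step pauses a run until a human approves it. A minimal sketch of a promotion stage from a declarative pipeline, with the branch condition and deploy command as placeholders:

```groovy
// Sketch of a manually approved promotion stage (lives inside the stages
// block of a declarative pipeline). Names and commands are placeholders.
stage('Promote to Production') {
    when {
        branch 'master'                            // only promote builds of the main branch
    }
    steps {
        input message: 'Deploy this build to production?'
        sh './deploy.sh production'
    }
}
```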
Additional Tools for a More Powerful Pipeline
While designing our pipeline, we found several other plugins that fit the pipeline concept well and met some important needs for our team. For example, we needed to delegate job execution to Jenkins slave instances so that we could run multiple pipelines in parallel, we needed somewhere to put our builds, and we wanted a set of tools to enhance the deployment portion of the pipeline. Here are a few of the most important of those plugins.
Amazon EC2 Plugin: This plugin is extremely useful for delegating work from the master Jenkins node to dynamically allocated slave instances that execute the steps in your pipeline. Slave instances are an absolute necessity when multiple feature-development branches are each triggering their own pipeline executions.
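Once the plugin is configured with an AMI and a label, a pipeline only has to request that label and Jenkins will provision a slave on demand. A minimal sketch, assuming 'ec2-worker' is the label you chose in the plugin’s cloud configuration:

```groovy
// Run this pipeline on a dynamically provisioned EC2 slave.
// 'ec2-worker' is a placeholder for the label configured in the
// Amazon EC2 plugin's cloud settings.
pipeline {
    agent { label 'ec2-worker' }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'
            }
        }
    }
}
```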
Amazon S3 Plugin: S3 is a great place to store build artifacts and configuration information so that all of your environments can easily access them. The S3 plugin allows the build steps in your pipeline to upload the resulting files so that the following jobs can access them with only a build ID or tag passed in as a parameter.
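As a plugin-agnostic sketch of the same idea, a pipeline step can simply shell out to the AWS CLI; this assumes credentials are already available on the node, and the bucket name and paths are placeholders:

```groovy
// Sketch: publish build artifacts to S3 keyed by a build tag so downstream
// jobs can fetch them with nothing more than that tag.
// Bucket name, paths, and tag format are placeholders; assumes the AWS CLI
// and credentials are available on the build node.
def buildTag = "dashboard-${env.BUILD_NUMBER}"
sh "aws s3 cp dist/ s3://my-artifact-bucket/${buildTag}/ --recursive"
```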
Custom Tools Plugin: This one is useful in combination with dynamically allocated slave instances for installing tools and packages, like JavaScript libraries or database migration tools, that your deploy and test jobs might need. It is a good alternative to baking those tools into the AMI you use as a template for the Jenkins slave nodes, and it lets you keep them continually updated without having to think about it.
Ansible Plugin: I recommend using a configuration management platform for most of your deployment steps. Many Bandwidth teams make use of Ansible for this, and the Ansible plugin for Jenkins takes care of installing and updating Ansible for you on all of your nodes.
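The plugin also adds an ansiblePlaybook step to Pipeline jobs, so a deploy stage can call a playbook directly. A minimal sketch, with placeholder playbook and inventory paths:

```groovy
// Sketch: run a deploy playbook from a pipeline step via the Ansible plugin.
// The playbook and inventory paths are placeholders for your own setup;
// extra variables (like a build tag) can be passed to the playbook as well.
ansiblePlaybook(
    playbook:  'deploy/site.yml',
    inventory: 'deploy/inventory/test'
)
```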