How Ansible supercharged our productivity

The Bandwidth Incubator creates new products with small teams of developers. Sometimes there are as few as two people working on a project. Without many teams of developers, each with their own specific purpose, it is up to a few people to do many things the right way, and to do them efficiently. We test, automate, build, and operationalize our projects.
I’m sure you would love to hear about our process in the Bandwidth Incubator, but I want to discuss how Ansible has found a place in almost every aspect of our work, as well as the tricks we have adopted along the way. This post in a sentence:
“Use Ansible to increase productivity while reducing complexity and points of failure in the development process.”

Ansible is a powerful automation engine that makes systems and applications simple to deploy. It orchestrates deployment of our test and production environments, manages SSL certificates and DNS records, provisions developer machines, orchestrates builds, and automates interaction with various AWS services. All of this is possible with only three preconditions being met:
- Ansible is installed
- Python and the boto package are both installed
- AWS credentials are set as environment variables
Store Secrets in S3
It’s never a good idea to store secret information like passwords in version control. To retain access control over secret information while keeping it out of your VCS, the Ansible s3 module can be used. All you need are the AWS_ACCESS_KEY and AWS_SECRET_KEY environment variables.
This lets our playbooks be used in test, development, and production contexts. Each context can simply load different secrets. This simplification and reuse of scripts lets us move fast with confidence.
We can pull in secret variables from S3 with a playbook like this:
# load-secrets.yml
- name: Get secrets from S3.
  connection: local
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Download secret variables from S3
      s3:
        bucket: some-bucket-name
        object: /ansible/vars/secrets.yml
        dest: vars/secrets.yml
        mode: get
    - name: Make sure the secrets are ignored by Git
      lineinfile:
        dest: .gitignore
        regexp: ^vars/secrets.yml$
        line: vars/secrets.yml

- name: Load secrets
  connection: local
  hosts: all
  gather_facts: false
  tasks:
    - include_vars: vars/secrets.yml
Include this playbook in another when secrets are needed.
- include: load-secrets.yml
Now access to these variables can be controlled with IAM policies: roles (developer, ops) can be separated by which secrets they are allowed to read.
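As a sketch (not our exact setup; the group name, policy name, and policy file path are all illustrative), the iam_policy module can attach such a policy to an IAM group:

# Hypothetical: let the developers IAM group read only the developer secrets
# stored under some-bucket-name/ansible/vars/.
- name: Attach a read-only secrets policy to the developers group
  iam_policy:
    iam_type: group
    iam_name: developers
    policy_name: read-developer-secrets
    policy_document: policies/read-developer-secrets.json
    state: present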
Less Arcane Bash, More YAML
Most people aren’t Bash wizards, which leads to fragile scripts with unexpected behavior in edge cases. That is not ideal when builds depend on external resources that cost money, or lock resources that block other builds from running.
A good example of where Bash is far more difficult is creating an EC2 instance and then waiting for SSH access:
With Ansible
- name: Create a build instance
  ec2:
    group: some-security-group
    image: ami-xxxxxxxx
    instance_type: c4.large
    vpc_subnet_id: subnet-xxxxxxxx
    key_name: my-favorite-keypair
    assign_public_ip: yes
    region: us-east-1
    instance_tags:
      Name: project-build
    wait: yes
  register: build

- name: Wait for SSH on created instances
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    timeout: 480
  with_items: "{{ build.instances }}"
With Bash
I’m not going to show an equivalent Bash script; it is a lot more complicated and much easier to break without knowing the intricacies of Bash.
The expressiveness of YAML, paired with the idempotency and reliability of Ansible, makes playbooks easier to reason about and maintain.
Local or Remote Build
Developers should be able to run the build either locally or remotely, and this is easy to achieve with Ansible. A playbook using the ec2, synchronize, and script modules can copy source code to a newly created EC2 instance and then run the same command you would run locally to start the build. A remote build can be kicked off from a developer laptop, TravisCI, Jenkins, or anywhere else a build would run, and because the playbook is general purpose, everything can be run locally first.
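Here is a minimal sketch of what such a playbook might look like. It assumes the ec2 and wait_for tasks from the earlier example already ran against localhost and registered their result as build; the remote user, the key_pair_path variable, the paths, and build.sh are all illustrative.

- name: Register the new instance in an in-memory build group
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Add the created instance to the build group
      add_host:
        name: "{{ item.public_ip }}"
        groups: build
        ansible_ssh_user: ec2-user                          # illustrative remote user
        ansible_ssh_private_key_file: "{{ key_pair_path }}" # illustrative variable
      with_items: "{{ build.instances }}"

- name: Copy the source and run the build remotely
  hosts: build
  tasks:
    - name: Sync the source tree to the build instance
      synchronize:
        src: "{{ source_dir }}/"
        dest: /home/ec2-user/project/
    - name: Run the same build script you would run locally
      script: "{{ source_dir }}/build.sh"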
Clean Up
With all of the automation the Bandwidth Incubator uses, it is important for us to be confident we aren’t creating an unnecessarily large bill. Time to clean up! If error cases aren’t handled correctly, a build might fail to clean up after itself, and since some build resources cost money, that is bad! The state of whatever you create in your general-purpose build playbook can be captured in a generated inventory file like this:
- name: Create inventory file
  copy:
    dest: /tmp/build-inventory
    content: |
      [build]
      {% for host in groups['build'] %}
      {{ host }} foo={{ foo }} bar={{ bar }}
      {% endfor %}
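The cleanup tasks below look up per-host variables such as id, region, and key_pair, so in practice the rendered inventory might look something like this (the address, key path, and ids are placeholders):

# /tmp/build-inventory (example contents)
[build]
54.xx.xx.xx ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/tmp/build-key.pem id=i-xxxxxxxx region=us-east-1 key_pair=project-build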
This method can be used for storing things like the ansible_ssh_user, ansible_ssh_private_key_file, region and id of a created instance. When you want to clean up, just include the following in a playbook and specify the generated inventory for use:
- name: Destroy build EC2 instances
  ec2:
    instance_ids: "{{ hostvars[item]['id'] }}"
    region: "{{ hostvars[item]['region'] }}"
    state: absent
  with_items: "{{ groups['build'] }}"
  when: hostvars[item]['id'] is defined and hostvars[item]['region'] is defined

- name: Destroy build KeyPair
  ec2_key:
    name: "{{ hostvars[item]['key_pair'] }}"
    region: "{{ hostvars[item]['region'] }}"
    state: absent
  with_items: "{{ groups['build'] }}"
  when: hostvars[item]['key_pair'] is defined and hostvars[item]['region'] is defined
Of course, one needs to be confident that cleanup is always done. Bash is great for gluing together multiple playbooks. Here’s an example build script that uses Ansible:
#!/bin/bash -ev

cleanup() {
  ansible-playbook playbooks/cleanup.yml -i /tmp/build-inventory
}
trap cleanup EXIT

ansible-playbook playbooks/build.yml -i '127.0.0.1,' -e "source_dir=$(pwd)"
This script will create an EC2 instance using some version of the methods described above, and it always ensures that the cleanup playbook is run when the Bash script exits. Certain values in the playbook should probably be parameterized in addition to source_dir: key_pair_name, key_pair_path, security_group, etc.
Conclusion
The Bandwidth Incubator picked Ansible in March of 2014 as its configuration management tool of choice. We have been more productive and remain happy with our decision! There is a large and healthy community behind Ansible. It has a module or a pattern that can be used to solve most problems associated with the development process, while reducing complexity and points of failure.
You might be more productive with Ansible if you:
- have an existing pile of Bash scripts for build orchestration
- have a small team
- are building against multiple operating systems
- use external resources in the build