
Ansible (Real Life) Good Practices

By Raphael Campardou

In the Ops Team, we love Ansible.

If you're not familiar with Ansible, it is a tool to automate, document and systematise server provisioning, a.k.a. Infrastructure as Code. At its most basic, you write simple tasks in YAML files and Ansible runs them sequentially on the remote hosts via SSH.

When you offer Operations as a Service like we do, Infrastructure as Code is one of the keys to a happy life.

Here are a couple of practices we implemented in our production stream, hoping they will help you discover or ease your own implementation.

A Little Style Guide

Use a module when available

Ansible comes with a lot of built-in modules. If you come from an Ops background, you might be tempted to default to the command module, to execute shell commands on the remote. While this would probably work, it is not the Ansible way. Modules ease a lot of the work and syntax, and will feed you with all sorts of useful information that you can use later on in the playbooks.
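For example, here is a package installation sketched both ways (the package name is just an illustration, written in the inline key=value syntax used elsewhere in this post):

```yaml
# Works, but not the Ansible way: not idempotent, reports nothing useful
- name: Install nginx with a shell command
  command: apt-get install -y nginx

# The Ansible way: idempotent, and its result can be registered and reused
- name: Install nginx
  apt: pkg=nginx state=present
  register: nginx_install
```

The second task will report "changed" only when it actually installs something, and `nginx_install` can drive later tasks.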

Set a default for every variable

In a role, you can have a defaults/main.yml file, holding—you guessed it—default values. Values in these files have the lowest precedence of the whole variable stack. You should set values for all of the variables used by your role, even if those values are set as part of another role. This helps to keep the roles independent of each other.
This holds true when registering a variable from a task's return.
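A minimal sketch of such a defaults file (the variable names are made up for illustration):

```yaml
# roles/nginx/defaults/main.yml
nginx_port: 80
nginx_worker_processes: 2
# also give a default to a variable normally set with 'register' in a task,
# so the role still works if that task is skipped
nginx_config_changed: false
```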

Use the 'state' parameter

A lot of modules take a state parameter.
It usually has a sensible default, but don't be lazy and set it, even if it is with the default value. At the very minimum it will provide better documentation and clearer intents, and it might prevent errors.
It is recommended as a best practice by the Ansible team itself.
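For instance, spelling out state even where it matches the default (a hypothetical example):

```yaml
- name: Install git
  apt: pkg=git state=present   # 'present' is the default; say it anyway

- name: Make sure apache2 is gone
  apt: pkg=apache2 state=absent
```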

Prefer scalar variables

Ansible supports both scalar and dictionary types of variables. We found it's easier and safer to use scalar ones. The default behaviour for two dictionaries with colliding names is to replace, not merge, which means a single dictionary cannot be assembled from different places (defaults and settings, or different roles, for instance). Namespacing a set of scalar variables will almost always be an easier way to manage them.

In the example below, if I want to override one of the defaults, I have to write the whole dictionary:

# roles/ruby/defaults/main.yml
ruby:
  version: 2.1
  experimental: false

# vars.yml
ruby:
  version: 2.1
  experimental: true

We prefer:

# roles/ruby/defaults/main.yml
ruby_version: 2.1
ruby_experimental: false

# vars.yml
ruby_experimental: true

Even if it would seem like a cleaner solution to encapsulate related variables in a common object, you will get much more flexibility from a simple namespace convention.

Of course, if your variables come from an external call (an API or another kind of request), e.g. the ansible_* facts, you will have to deal with dictionaries.

Tag all the things

We thought the tags were a thing of the 90's. Wrong. If you use AWS, you already know that tags are first-class citizens. Same goes in your playbook.

Tags are not required, but they are ridiculously handy, and arguably they should be. With the appropriate tags, you can run only parts of the playbook, or exclude others.

Keep the number of tags low, document the usage of tags, maybe have a closed set, and double check the spelling.
We tag each and every task with the name of the role to which it belongs, plus whether it is config, service, package or gem. We also add a tag for the main task types that can be spread over many roles. For instance, we have tasks to set up logrotate in multiple roles; they have their own tag.

In combination with the --tags, --skip-tags and --limit arguments, you can say "Run just the configs but not the logrotate tasks, only on the app servers".
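As a sketch (the role, file and group names here are illustrative, not from the article):

```yaml
- name: Copy nginx configuration
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  tags:
    - nginx
    - config

- name: Set up logrotate for nginx
  template: src=logrotate.j2 dest=/etc/logrotate.d/nginx
  tags:
    - nginx
    - logrotate
```

With tags like these in place, something along the lines of `ansible-playbook site.yml --tags config --skip-tags logrotate --limit app_servers` expresses the request above (assuming an `app_servers` group exists in your inventory).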

A debug tag can be very handy while writing the playbook, to just run a couple of tasks.

Use multi line YAML notation

This is mainly a personal preference, but I think it adds a lot of clarity—for longer lines of course, but even for short ones. Vertical code is easier to maintain than horizontal code, and this is true for description files too. For example:

- name: Dummy task
  file: src=blah/{{ blup }} dest=blip/blop state=present

# vs

- name: Dummy but good looking task
  file: >
    src=blah/{{ blup }}
    dest=blip/blop
    state=present

Vault Pseudo leaf encryption

Very recently, with version 1.5, Ansible introduced Ansible Vault, a way to encrypt data in the playbook, and decrypt it at run time. This feature was highly requested, and gives Ansible its true place among platform management tools.

The thing is: what we like about Ansible is the readability, and encryption has a way of making things, well, less readable…
The ansible-vault command encrypts or decrypts a whole var file; you cannot encrypt just the value of a variable. The solution is simple enough: create a second var file, just for the sensitive data. But this raises another issue: your variables are now spread over multiple files, some of them encrypted. This can get messy. For instance, if you define a dictionary of variables and only one of them is sensitive, you have to encrypt the whole dictionary.

Leaf encryption was (is) a feature request, but in the meantime, there is an elegant way of keeping it both readable and secure: nested variables.

For every sensitive variable, you create a prefixed double that goes in an encrypted file.

# var_file
db_password: "{{ vaulted_db_password }}"
# and for a dictionary
aws:
  - "access_key_id='abcdefgh'"
  - "secret_access_key='{{ vaulted_aws_secret_access_key }}'"

# vault_file
vaulted_db_password: a_super_secret
vaulted_aws_secret_access_key: the_aws_secret

That way, you can manipulate all your vars like before, knowing the vaulted version stays encrypted. You can even solve the problem of having someone responsible for the encrypted file and the rest of the team never seeing its content but still being able to manage var files as they need.

Git Pre-commit Hook for Vault

This last practice is not directly Ansible related, it's more a piece of workflow advice.

There are two ways to handle a vault file: ansible-vault [encrypt|decrypt], or ansible-vault edit. With the first method, there is a good chance that, at some point, a decrypted version will end up in a Git commit. And while it is possible to wipe it from the history, it's a real pain.

I wrote a simple pre-commit hook that checks if a file called "vault-something" is encrypted before committing. If not, it displays a helpful message.

Copy it to .git/hooks/pre-commit and make it executable (or append it to the existing pre-commit hook if there is one). It can live in your Ansible project's repo or in your global git template.

#!/bin/sh
#
# Pre-commit hook that verifies if all files containing 'vault' in the name
# are encrypted.
# If not, commit will fail with an error message.
# File should be .git/hooks/pre-commit and executable

FILES_PATTERN='.*vault.*'
REQUIRED='ANSIBLE_VAULT'

EXIT_STATUS=0
wipe="\033[0m"
yellow='\033[1;33m'
UNENCRYPTED_FILES=''
# carriage return hack. Leave it on 2 lines.
cr='
'
for f in $(git diff --cached --name-only | grep -E "$FILES_PATTERN")
do
  # test for the presence of the required bit.
  MATCH=`head -n1 "$f" | grep --no-messages "$REQUIRED"`
  if [ ! "$MATCH" ] ; then
    # Build the list of unencrypted files if any
    UNENCRYPTED_FILES="$f$cr$UNENCRYPTED_FILES"
    EXIT_STATUS=1
  fi
done

if [ ! $EXIT_STATUS = 0 ] ; then
  echo '# Looks like unencrypted ansible-vault files are part of the commit:'
  echo '#'
  echo "$UNENCRYPTED_FILES" | while read -r line; do
    if [ -n "$line" ]; then
      echo "#\t${yellow}unencrypted:   $line${wipe}"
    fi
  done
  echo '#'
  echo "# Please encrypt them with 'ansible-vault encrypt <file>'"
  echo "#   (or force the commit with '--no-verify')."
fi

exit $EXIT_STATUS

I hope this can be helpful. Do you have any Ansible best practices to share?

To find out how reinteractive can turn your web application vision into reality, get in touch with us through our contact form or call us on +61 2 8019 7252.