
Cloud Antics

A technical blog guiding cloud solutions

Orchestrating and integrating Chef ecosystems with Ansible in AWS

These days, the number of tools we use to manage our infrastructure and services is endless; it seems as if a new batch pops up and takes over the world almost overnight. It’s hard to limit yourself to just one, since they all have their own strengths and weaknesses.

That’s why every once in a while, you need a little help gluing things together. Today, we’re going to look at one approach to harmonize the relationship between Ansible and Chef.


In this case, we’re going to have Chef do what it’s good at, provisioning the system, while Ansible is responsible for orchestrating the hosts and knowing the state of both environments.

For now, we’re going to examine doing this with a pre-existing Chef setup and adding Ansible sugar on top. Eventually this will be migrated to using chef-solo and building reusable images, but for now this will do.


First, we’ll integrate the two systems by making the Ansible inventory consult both the Chef server and the current state of AWS.

How do we do this, you ask? Simple: Ansible can reference a folder of inventories, so we’ll just toss the two dynamic inventory scripts that we need in there.

To simplify this a little, here’s a quick bash snippet:

mkdir -p inventory && pushd inventory
  curl -O
  curl -O
  curl -O
  chmod +x *.py
popd

NOTE: There’s a bug in the version that the maintainer provides, and since they haven’t fixed it, the script above references the fixed file instead of the canonical one.
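For context, a dynamic inventory script is just an executable that prints JSON describing groups and hosts when Ansible calls it with --list (and per-host vars with --host). Here’s a toy sketch of that contract, with hypothetical hosts, not one of the real scripts above:

```shell
# toy_inventory.sh: the minimal dynamic inventory contract (hypothetical hosts)
cat > /tmp/toy_inventory.sh <<'SCRIPT'
#!/bin/sh
# Ansible invokes inventory scripts with --list (all groups)
# or --host <name> (variables for a single host)
case "$1" in
  --list)
    echo '{"web": {"hosts": ["10.0.0.1", "10.0.0.2"]}, "_meta": {"hostvars": {}}}'
    ;;
  --host)
    echo '{}'
    ;;
esac
SCRIPT
chmod +x /tmp/toy_inventory.sh
/tmp/toy_inventory.sh --list
```

The real chef and ec2 scripts do exactly this, just with groups built from Chef node data and AWS tags instead of hardcoded JSON.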

Next, we need to make sure that all of the required variables are set correctly:

# chef_inventory specific stuff
export CHEF_PEMFILE="~/.knife/chef-server-validation-key.pem"
export CHEF_USER="chef-username"

# ec2_inventory specific stuff
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
Now that you have everything ready to go, let’s test them both to ensure they work properly:

inventory/chef_inventory.py --list
# should return a bunch of groups and hosts

inventory/ec2_inventory.py --list
# should also return a bunch of groups and hosts

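If either script prints something other than valid JSON, piping its output through a formatter will surface the problem quickly. For example, against a sample payload (a hypothetical one-group inventory):

```shell
# valid inventory JSON pretty-prints cleanly; broken JSON makes json.tool error out
echo '{"web": {"hosts": ["10.0.0.1"]}}' | python3 -m json.tool
```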
If that all works as intended, then you should be good to go! You’ll need to pass -i inventory/ on each ansible run, or drop the following into your ansible.cfg:

[defaults]
inventory = ./inventory

Instantiating and provisioning new hosts

So here’s where the tricky part comes in: how do we spin up a new node in AWS while still being able to add it to the Chef server, register it as a new client, and provision it with whatever run list we want?

Here’s one approach that I’ve found works well. It takes advantage of the knife command, so you’ll need ChefDK installed on the machine this is being run from.

We do this in two steps, effectively:

  • Spin up the host using Ansible in the correct subnet, with the proper security groups, etc.
  • Use the reference to that host to bootstrap the node using the knife command
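The second step boils down to templating a knife bootstrap command per instance. Roughly this, with placeholder values standing in for the registered EC2 facts and play vars:

```shell
# placeholder values; in the role these come from the ec2 result and play vars
private_ip="10.0.0.5"
node_name="my-ansible-bootstrapped-server"
chef_role="my_system_role"
initial_user="ubuntu"

cmd="knife bootstrap -y $private_ip \
 --environment chef_prod_env \
 --node-name $node_name \
 --run-list 'role[$chef_role]' \
 --ssh-user $initial_user"
echo "$cmd"
```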

Here’s an example role to do just that:

# roles/create-server/tasks/main.yml

- name: spin up server
  ec2:
    state: present
    image: "{{ ami }}"
    region: "{{ region }}"
    zone: "{{ availability_zone }}"
    group_id: "{{ security_groups }}"
    instance_type: "{{ instance_size }}"
    key_name: "{{ aws_keypair }}"
    vpc_subnet_id: "{{ subnet }}"
    wait: yes  # so private_ip is populated in the result
    instance_tags:
      Name: my-ansible-bootstrapped-server
      role: "{{ chef_role }}"
  register: ec2_instances

- name: wait for instance ssh port to be up
  wait_for: port=22 host="{{ item.private_ip }}"
  with_items: "{{ ec2_instances.instances }}"

- name: knife bootstrap
  # the chef_prod_env environment will be used again later
  shell: >
    knife bootstrap -y {{ item.private_ip }}
    --environment chef_prod_env
    --node-name {{ item.tags.Name }}
    --run-list 'role[{{ chef_role }}]'
    --ssh-user {{ initial_user }}
  with_items: "{{ ec2_instances.instances }}"

- name: register hosts into new_hosts hostgroup
  add_host:
    groups: new_hosts
    name: "{{ item.private_ip }}"
  with_items: "{{ ec2_instances.instances }}"

Now that you have a simple role, run it from a localhost play on a machine that has the keys needed to talk to both AWS and Chef:

# create-web-server-play.yml
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    availability_zone: us-east-1a
    security_groups: sg-12345678
    instance_size: t2.micro
    aws_keypair: my_keypair
    subnet: subnet-12345678
    region: us-east-1
    ami: ami-12345678
    initial_user: ubuntu
    chef_role: my_system_role

  roles:
    - role: create-server

Adding instances to an ELB

For now I’m going to assume you have an ELB already set up and ready to go, since that’s a bit more of an involved process. I’m also going to assume you already have a role that adds instances to an ELB (if that’s something you’re interested in, let me know in the comments below!).

This is where it gets handy to have the environment info from Chef AND Ansible available to you:

# add-hosts-to-elb-play.yml

# tag_role_my_system_role comes from the AWS inventory
# chef_prod_env exists on the Chef server, and was used above
- hosts: "tag_role_my_system_role:&chef_prod_env"
  roles:
    - role: add-host-to-elb
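The :& in that host pattern is set intersection: only hosts that appear in both inventory groups get targeted. Conceptually it’s the same as intersecting two sorted host lists, shown here with hypothetical IPs:

```shell
# hosts tagged role=my_system_role in AWS (hypothetical)
printf '10.0.0.1\n10.0.0.2\n' | sort > /tmp/tag_role_my_system_role
# hosts registered in the chef_prod_env environment (hypothetical)
printf '10.0.0.2\n10.0.0.3\n' | sort > /tmp/chef_prod_env
# comm -12 prints only lines common to both sorted files
comm -12 /tmp/tag_role_my_system_role /tmp/chef_prod_env
# -> 10.0.0.2
```

Only 10.0.0.2 makes the cut here, just as only hosts that exist in AWS with the right tag AND are registered in the right Chef environment make it into the play.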

There you go! Now you can target hosts using state from both Chef and AWS.

Future steps

  • I’d love to create (or have someone create) a Chef module in Ansible to manage the server and bootstrap process more holistically. Might be a project of mine at some point.