Vagrant and Docker are powerful tools on their own, but when you combine them you get something very special: fully-functional local development environments consisting of many inter-dependent services. A working example is here: github.com/DevBandit/vagrant-docker.

A Bit of Backstory

Setting up a local development environment so you can actually work on a project used to be a major pain. Vagrant has mostly solved this problem and my teams use it on almost every project now. Clone a repo, run vagrant up, and in a few minutes you have a fully functional dev environment. It’s like magic!

However, like many companies these days, we’re taking an increasingly SOA approach to our projects. Instead of a single monolithic codebase written in a single language, we’re building several single-responsibility services, each with its own team, technology stack, and infrastructure. We can debate the pros and cons of this approach, but that’s a different blog post.

With this architecture there are several challenges, including this: How do you build a local dev environment that has all these inter-dependent services talking to each other so you can simulate a real-world production environment? And how do you make it simple and repeatable? It’s not a trivial problem.

A simplistic approach is just to clone all the repos and run vagrant up for all of them. Let’s assume you’ve got a beefy machine that can run all those VMs without croaking. You still have to make sure the various services can talk to each other, and make sure you’re working on the proper branches of each repo so the APIs are compatible. After just a few services it starts to feel like the dark ages before Vagrant.

Docker to the rescue!

I confess I was a bit late to the Docker party. While we were scratching our heads trying to solve this local dev environment problem I started to wonder if Docker could be the solution. After all, Docker containers are similar to VMs but much more lightweight so you could run more of them without bogging down your machine. Docker containers also support linking so you can easily make them talk to each other without complicated networking setups. Finally, Docker support is already built in to Vagrant in the form of a Docker provider—which allows vagrant environments to be backed by Docker containers rather than virtual machines—and a Docker provisioner—which helps automate the process of setting up Docker containers within your VM.

The solution we ended up with is a single Vagrant VM that runs all the services that make up our platform in separate Docker containers. It also contains a couple of utility containers to make things easier: a shared PostgreSQL container and a web proxy container that routes requests to the app containers.

A Working Example

If you’re like me, you learn better by example. So I set up a little proof of concept here.

  1. Make sure you have both Vagrant and VirtualBox installed.
  2. Install the vagrant-docker-compose plugin.

    vagrant plugin install vagrant-docker-compose
    
  3. Make an entry in your hosts file like this:

    192.168.50.100 postgres.local django.local flask.local
    
  4. Clone the repo.
  5. Run vagrant up.

In a few minutes you should have a virtual machine running four Docker containers, including a sample Django application and a Flask application. Point your browser to http://django.local and http://flask.local to verify that the applications are running. Now, imagine that these little example applications are fully-functional web services that can talk to each other. Cool, huh?
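If you prefer the terminal, you can run the same check with curl from your host machine. The Flask app’s response looks like this (the Django app serves its own page):

$ curl http://flask.local
Hello World! This is the Flask App!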

Let’s look under the hood

To get a better idea of what’s happening under the hood, let’s SSH into our VM and take a look around.

$ vagrant ssh

And we’re in! Vagrant has set up this virtual machine to host all of our containers; it doesn’t do much else besides run the Docker daemon. You can see the containers we have running like this:

$ docker ps -a
CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS                    NAMES
913ec1dd1f6f        vagrant_proxy:latest       "/runserver.sh"        9 minutes ago       Up 9 minutes        0.0.0.0:80->80/tcp       vagrant_proxy_1       
7c72567104f7        vagrant_djangoapp:latest   "python manage.py ru   9 minutes ago       Up 9 minutes        80/tcp                   vagrant_djangoapp_1   
6f753d5b05e6        vagrant_flaskapp:latest    "python hello.py"      9 minutes ago       Up 9 minutes        80/tcp                   vagrant_flaskapp_1    
a9da22ee46c4        vagrant_postgres:latest    "/usr/lib/postgresql   9 minutes ago       Up 9 minutes        0.0.0.0:5432->5432/tcp   vagrant_postgres_1 

This shows us all the containers that are currently running. If you’re not familiar with Docker, you can think of each of these as a lightweight VM running inside your main Vagrant VM. (They aren’t actually VMs, of course; they’re Docker containers.)
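Because each container is independent, you can stop, start, or restart one service without disturbing the others. For example, to bounce just the Flask app:

$ docker restart vagrant_flaskapp_1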

You can see the output of any of these containers by running Docker’s logs command:

$ docker logs vagrant_flaskapp_1
 * Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
 * Restarting with stat

This will output the container’s STDOUT and STDERR buffers and is very helpful for debugging containers when something goes wrong. Here we’re looking at the output from the Flask container, and we can see that the server is listening on port 80.
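If you want to watch the output live while you exercise an app, the logs command can also tail and follow:

$ docker logs -f --tail 20 vagrant_flaskapp_1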

Let’s say you wanted to connect to this container and do something. Typically you’d open an SSH connection, but Docker provides a better way with the exec command.

$ docker exec -ti vagrant_djangoapp_1 /bin/bash

You should now have a bash prompt inside the Django container, just as if you’d SSH’d in. The exec command simply runs a command in a running container. The -ti flags tell Docker to run the command in interactive mode (keeping STDIN open) and to allocate a pseudo-TTY. The next part specifies the container we want to run the command in, which is vagrant_djangoapp_1 in this case. Finally, we specify the command we want to run, which is /bin/bash.

This is a perfect opportunity to demonstrate one of the coolest features of Docker: linking. The flaskapp container is linked into the djangoapp container, so the Django app can reach the Flask app by name. Docker adds an entry to the container’s hosts file for each linked container. You can test this out with a simple curl command from inside the Django container:

$ curl flaskapp
Hello World! This is the Flask App!
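That hostname works because of the entry Docker added to the Django container’s hosts file when it linked the two containers. Still inside the container, you can see it for yourself (the exact IP, and any extra aliases on the line, will vary on your machine):

$ grep flaskapp /etc/hosts
172.17.0.3  flaskapp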

There’s so much more you can do with Docker containers, but I’ll leave it to you to read the docs.

How Does It Work?

It all starts in the Vagrantfile.

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.50.100"
  config.vm.hostname = "vagrant-docker-example"

  config.vm.provision :docker
  config.vm.provision :docker_compose, yml: "/vagrant/docker-compose.yml", rebuild: true, run: "always"

end

First, we’re using the ubuntu/trusty64 box, but you can use whatever distro you want as long as it supports Docker. Instead of exposing a few ports, we’re exposing the entire VM on a private IP so we can support multiple hostnames. You can customize this, of course; just make sure you update your /etc/hosts to match.

Finally, we’re telling Vagrant to provision our containers based on the docker-compose.yml file. You’ll need the vagrant-docker-compose plugin for this (installed in step 2 above).
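Under the covers, the plugin installs docker-compose inside the VM and brings everything up from that file. It’s roughly equivalent to running something like this in the VM yourself:

$ docker-compose -f /vagrant/docker-compose.yml build
$ docker-compose -f /vagrant/docker-compose.yml up -d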

docker-compose.yml

Docker-compose is a very handy tool that lets you define a multi-container environment in a single YAML file. This is the file you’ll update when you add more containers to your environment (without having to touch the Vagrantfile).

postgres:
  build: ./postgres
  ports:
    - "5432:5432"

flaskapp:
  build: ./flask-app
  command: "python hello.py"
  links:
    - postgres
  volumes:
    - /vagrant/flask-app/src:/opt/flask-app

djangoapp:
  build: ./django-app
  command: "python manage.py runserver 0.0.0.0:80"
  links:
    - postgres
    - flaskapp
  volumes:
    - /vagrant/django-app/src:/opt/django-app

proxy:
  build: ./proxy
  ports:
    - "80:80"
  volumes:
    - /vagrant/proxy/sites-enabled:/etc/nginx/sites-enabled
  links:
    - flaskapp
    - djangoapp

You can see we have our four containers defined with the following parameters:

  • build: the directory containing the Dockerfile used to build the container
  • command: the command to run when starting the container. (This is optional; if you omit it, the container runs the default command defined in its Dockerfile.)
  • links: lists the containers that need to be linked to this container (provides a handy entry in /etc/hosts file)
  • volumes: maps paths on the VM (here, the synced /vagrant folder) to paths inside the container
  • ports: exposes and maps ports
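A nice side effect of driving everything through docker-compose is that you can rebuild and restart a single service without touching the others. For example, after changing the Flask app’s Dockerfile you could run this inside the VM (assuming docker-compose is available on the VM’s path, which the plugin takes care of):

$ cd /vagrant
$ docker-compose build flaskapp
$ docker-compose up -d flaskapp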

The Proxy Container

The proxy container is a special utility container that routes requests from your browser to the correct container. It’s also the part that I think could use the most improvement in this setup, but I digress. Without this container you’d have to do a bunch of port mapping, and that would be a huge pain.

I’m using nginx for the proxy and the configuration is in proxy/sites-enabled. There is a file for each container that needs to accept HTTP requests. Here’s an example:

server {
    listen 80;
    server_name django.local;
    location / {
        proxy_pass http://djangoapp:80;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This tells nginx to listen on port 80 for requests to django.local and proxy them to the djangoapp container. This works because of the magic of Docker’s linking feature: in the docker-compose.yml file we linked the djangoapp container to the proxy container, so there is a djangoapp entry in the proxy’s /etc/hosts file that points to the internal IP of the Django container.
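You can exercise that whole path from inside the VM (via vagrant ssh) without touching your browser. Curl the proxy on port 80 and set the Host header that nginx matches on:

$ curl -H "Host: django.local" http://localhost/

nginx sees the django.local host name, matches this server block, and forwards the request to the Django container over the link.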

Wrapping Up

This is just a proof of concept and a simplistic example. I mean, we didn’t even touch the Postgres container! But I’m hoping it’s enough to grasp the idea.

I’ve been using this setup in a few real-world projects. Here are some things I’ve learned:

  • Make each container a git submodule. This way each service has its own repository and deployment process.
  • Use environment variables for configuration. Docker can load environment variables from a file to set local development configuration values.
  • In your Vagrantfile, add a shell script to run AFTER your containers are running to do any bootstrapping your environment needs (a sketch follows after this list). This is great for creating databases and running Django’s migrate command to create your table schemas.
  • If things get really borked you can always stop the container, delete the image, and run vagrant provision to get everything back to normal. I’ve done it more times than I’d like to admit.
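Here’s a minimal sketch of the kind of post-up bootstrap script mentioned above. The script name, database name, and user are hypothetical; adjust them to match your own Postgres image and Django settings:

#!/bin/bash
# bootstrap.sh -- run by a shell provisioner after the containers are up

# create the development database in the shared Postgres container
# (assumes the container's default postgres superuser; adjust to your image)
docker exec vagrant_postgres_1 createdb -U postgres exampledb

# apply Django's migrations inside the Django container
docker exec vagrant_djangoapp_1 python manage.py migrate --noinput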

And finally, if you have ideas to improve this setup please let me know.