A Craft CMS Development Workflow With Docker: Part 4 - Docker In Production

As part of my Craft CMS Development Workflow With Docker series of articles, I'll be covering, from start to finish, setting up a development workflow for working with Craft CMS in Docker.

Git repo here.



If you're following along with this series after completing Part 3 you'll have a few docker images built and stored in GitLab's container registry, ready to be deployed. In Part 4 we'll be getting these live on a server in as simple and reliable a way as possible.

We'll Need A Server

I don't want to get too distracted by talking about provisioning servers or hosting, but we'll need something to deploy to in order to test this out. Here are a few options:

I'm going to be using a server running Ubuntu. You should be able to use any Unix-like OS and still follow this article, as long as it's supported by Docker and docker-compose.

Once you've got yourself a server which you can SSH into you're all set.

Install The Server Requirements

Probably the easiest thing you've done all year:

curl -L http://bit.ly/dockerit | sh

That'll get Docker and docker-compose installed using a little shell script. However, if you'd prefer to do this manually, feel free to do so.
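If you'd rather not pipe a script into sh, a rough equivalent on Ubuntu is to install the distribution packages yourself. This is just a sketch using Ubuntu's own docker.io and docker-compose packages; package names may differ on other distributions:

sudo apt-get update
sudo apt-get install -y docker.io docker-compose

# Make sure the Docker daemon starts now and on every boot
sudo systemctl enable --now docker

# Optional: allow your user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER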

And that's it. Our server is ready to receive our project.

Organisation

When I'm working with docker on a remote server I like to keep things organised by creating a directory per project, just in case I end up deploying multiple docker projects to a single server. I tend to keep all of these in the home directory of the user that I'm logging in as, just so they're super easy to find and list.

cd ~
mkdir craft-in-docker
cd craft-in-docker

Production docker-compose.yml

We can use docker-compose to spin up our production containers in a similar way to how we used it locally. The main difference is that rather than pointing it towards local Dockerfiles, we'll reference pre-built images in GitLab's container registry.

Add the following to ~/craft-in-docker/docker-compose.yml on your server:

version: '2'
services:
  nginx:
    image: registry.gitlab.com/[your-username]/[project-slug]/nginx:latest
    restart: unless-stopped
    ports:
      - 80:80
    volumes:
      - uploads-data:/var/www/html/web/images
      - cpresources-data:/var/www/html/web/cpresources

  php:
    image: registry.gitlab.com/[your-username]/[project-slug]/php:latest
    restart: unless-stopped
    expose:
      - 9000
    volumes:
      - cpresources-data:/var/www/html/web/cpresources
      - uploads-data:/var/www/html/web/images
    environment:
      DB_DRIVER: mysql
      DB_SERVER: database
      DB_USER: project
      DB_PASSWORD: SomeSecureString
      DB_DATABASE: project
      DB_SCHEMA: public
      DB_TABLE_PREFIX: craft_
      ENVIRONMENT: production
      SECURITY_KEY: AAAAAAAAAAAAAAAAAAAAAAAAA

  database:
    image: mariadb:10.3
    restart: unless-stopped
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: SomeSecureString

volumes:
  uploads-data:
  cpresources-data:
  db-data:
This looks very similar to our local docker-compose.yml. There are just a few differences to point out.

Image URLs

We've replaced the build: section of our nginx and PHP services with image: to reference a remote image rather than a local Dockerfile location.

Restart Policies

We've added restart policies for our containers so that they'll automatically resume if the server is rebooted. We've set this to unless-stopped. You could use always instead, but with that policy any containers you've manually stopped with docker-compose stop will come back up the next time the Docker daemon restarts, which I have found a bit confusing in the past.
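If you ever want to check which policy a running container actually has, docker inspect can tell you. The container name below is an assumption - use docker ps to find the real one on your server:

# Print the restart policy of a running container
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' craft-in-docker_php_1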

uploads-data Named Volume

During local development we had our src/web directory mounted into both our nginx and PHP containers. This allowed us to share any files in this directory between the two containers. In production we don't have our project files on the host's filesystem and we also want to mount as little as possible (mounts can be slow and cause permissions issues), but we still need to be able to upload files using PHP and subsequently serve them using nginx.

We solve this by creating a named volume specifically for our uploaded assets. This will cause any files written to /var/www/html/web/images in either container to actually be written to the host's filesystem and also shared between the containers.

By writing these files to the host's filesystem we also ensure that they are persistent. Remember that container filesystems are ephemeral - any changes are lost when a container is removed and recreated. So we need to make sure that any files which are not inside our pre-built images, and that also need to be persisted, are pulled out to the host's filesystem in this way.
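If you're curious where that data actually ends up, docker can show you. The volume name below assumes the default compose project name (the directory name, craft-in-docker); check docker volume ls if yours differs:

# List all named volumes on the host
docker volume ls

# Show where the uploads volume lives on the host's filesystem
docker volume inspect --format '{{ .Mountpoint }}' craft-in-docker_uploads-data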

Container Registry Authentication

Before we can go any further we need to make sure our server is able to connect to the container registry in order to download our images.

This is always done using docker login but different services require you to supply that command with different credentials. GitLab uses 'Personal Access Tokens' for this. These allow docker to authenticate as you when connecting to the registry and access any images that belong to you.

First, generate a new personal access token in your GitLab profile settings. It'll need the read_registry permission and nothing else. It's advisable to create a new token for each server that you want to have access to your images, so give it a name that allows you to identify which server it has been used on. Something like craft-in-docker deploy token would be good.

Copy the token that is generated and then on your server run:

docker login registry.gitlab.com

The username is your GitLab username.

The password is the access token that you just generated.
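If you'd prefer to avoid the interactive prompts - when scripting this, for example - docker login can also read the token from stdin. The placeholders below are yours to fill in:

# Non-interactive login - substitute your own username and access token
echo "your-access-token" | docker login registry.gitlab.com -u your-username --password-stdin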

Up And Running

Now that we've defined the services that we want to run on our server, along with their relevant images, restart policies and mounted volumes, we're ready to launch our site.

docker-compose up -d

That's all there is to it.

Docker will download the three images that it needs, create the volumes and get everything started for you. If one of your containers encounters an error or your server reboots, Docker will automatically bring your containers back up again.
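If you want to double check what's running before opening a browser, a couple of docker-compose commands on the server will do it:

# All three services should be listed with a state of "Up"
docker-compose ps

# Tail the logs of a single service if something looks wrong
docker-compose logs -f php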

Check if everything's working by visiting your server's IP address in a browser. You should see the standard Craft "I'm not installed yet" error page. Head over to /admin/install and get everything set up to your liking. Don't forget to create an asset source pointing to /var/www/html/web/images with a URL of @web/images to test that out too.

Updates

"This is so cool and easy! But if I can't alter my project files on the host's filesystem, how do I push updates?"

Remember that the whole idea behind docker images is that they are immutable. This is a very good thing.

In order to make an update to your project you first need to create new versions of your project's images. Luckily we've already set up a CI pipeline in GitLab to do that.

The process goes like this:

  1. Make changes to your project locally
  2. Commit and push
  3. GitLab runs the CI pipeline which creates new project images
  4. You pull the new images to your server, completely replacing the old ones

Give this a go now. Make a small but visible change to the project on your local machine, maybe mess with src/templates/index.html, then commit and push to GitLab. Once the CI task has completed, log back into your server and run:

cd ~/craft-in-docker
docker-compose pull
docker-compose up -d

The pull command instructs docker to look for any updated versions of the images that it's currently using. If it finds any it'll download them ready for you to use. While you are doing this your site is still up and running without interruption.

When you run docker-compose up -d it'll prepare new containers based on the new images that were just pulled and then do a straight swap - removing the old containers and replacing them with the new ones.

Atomic, zero-downtime deployments have never been so easy.
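One bit of housekeeping: each deployment leaves the previous images behind on the server, which slowly eats disk space. Every now and then it's worth clearing out anything that's no longer referenced:

# Remove dangling images left behind by previous deployments
docker image prune -f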

Automated Updates

What's better than updates which can be applied with two commands?

Updates which can be applied with zero commands!

We can leverage our existing GitLab CI pipeline in order to move updates from local development to production with a simple git push.

To make this work we first need to provide some mechanism to connect to our production server in order to run commands from CI. Using SSH for this is both secure and relatively simple to get set up so it's my preference.

We'll start by generating an SSH key pair on our server and adding it to the authorized_keys file which will allow anyone who has the private portion of the key to log into the server:

mkdir -p ~/.ssh
ssh-keygen -f ~/.ssh/gitlab-rsa
# Accept defaults on the prompts, don't add a passphrase
cat ~/.ssh/gitlab-rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/gitlab-rsa

This will set up the key and then print out the private portion. Select all of this private key, including the first and last lines with all the dashes, and copy it.
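It's worth confirming that the new key actually works before handing it to GitLab. One quick way, run on the server itself, is to SSH back into the same machine using the key:

# Should log you straight in without asking for a password
ssh -i ~/.ssh/gitlab-rsa localhost 'echo "key works"'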

Get your project open in GitLab and select Settings > CI/CD in the left hand nav. Expand the Variables portion of the page. Create a new variable called PRODUCTION_SSH_KEY and paste the private key you just copied as the value.

Get that saved.

Next we need to set up your .gitlab-ci.yml file to make use of that SSH key and also define a new task which will perform the production deployment:

image: tmaier/docker-compose:18.09

services:
  - docker:18.05-dind

stages:
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  PHP_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/[your-username]/$CI_PROJECT_NAME/php:latest
  NGINX_CONTAINER_RELEASE_IMAGE: registry.gitlab.com/[your-username]/$CI_PROJECT_NAME/nginx:latest
  BUILDCHAIN_IMAGE: registry.gitlab.com/[your-username]/$CI_PROJECT_NAME/buildchain:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - apk update
  - apk upgrade
  - apk add openssh-client
  - eval $(ssh-agent -s)
  - mkdir -p ~/.ssh
  - echo "$PRODUCTION_SSH_KEY" > ~/.ssh/production_rsa
  - chmod 600 ~/.ssh/production_rsa
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

build:
  stage: build
  script:
    - docker pull $PHP_CONTAINER_RELEASE_IMAGE || true
    - docker pull $BUILDCHAIN_IMAGE || true
    - docker-compose run --rm buildchain yarn run build
    - docker-compose push buildchain
    - docker build -f docker-config/php/Dockerfile --pull --cache-from $PHP_CONTAINER_RELEASE_IMAGE -t $PHP_CONTAINER_RELEASE_IMAGE .
    - docker build -f docker-config/nginx/Dockerfile --pull -t $NGINX_CONTAINER_RELEASE_IMAGE .
    - docker push $PHP_CONTAINER_RELEASE_IMAGE
    - docker push $NGINX_CONTAINER_RELEASE_IMAGE

deploy:
  stage: deploy
  script:
    - ssh -i ~/.ssh/production_rsa root@your.server.ip 'cd ~/craft-in-docker && docker-compose pull && docker-compose up -d'
  only:
    - master
  # Uncomment this if you want the deployment to be manual via the GitLab UI
  # when: manual
  environment:
    name: production

We've added a new item to stages called deploy. The tasks within this stage will only execute if the stages before it all complete successfully. So in our case we'll only deploy if build succeeds.

We've also added a few lines to the before_script. These just get the SSH key that we added as a Variable into a state in which it can be used by the SSH client.

Finally we added in a new task which logs into our server and performs the pull and up -d commands on our behalf.

Double check the user and server IP address in this task, and also the folder into which it is cd'ing.

Get that committed and push to GitLab. You should see GitLab create two CI tasks this time, one of which will be the automated deployment.

Next Steps

We can run our project locally, make changes, push them to GitLab, build portable images of our project and auto-deploy them to different servers.

This can easily be extended to push out to multiple servers, maybe one for staging and one for production. Perhaps you can automatically deploy to staging, but require a manual interaction within GitLab to push to production after the client has double checked everything?

The flexibility of using docker images in combination with a robust CI pipeline gives you all of these potential options whilst requiring relatively little DevOps budget.

Also, remember that I said if a CI stage fails the rest won't continue processing? That sounds like the perfect opportunity to perform some automated testing. Which we'll cover in Part 5.

Feedback

Noticed any mistakes, improvements or questions? Or have you used this info in one of your own projects? Please drop me a note in the comments below. 👌

Edits

  • 2018-12-19: Fixed database volume mount path in docker-compose.yml.
