A Craft CMS Development Workflow With Docker: Part 1 - Local Development

When working with specific technologies and frameworks over a number of projects, and developing workflows around them, it's easy to start taking for granted the knowledge that has built up around them, especially when that knowledge has been codified or exists as part of your everyday workflow.

Every now and again I find it helpful to take a step back and review all of the learning that has built up over time. I thought it might be useful to write out, from beginning-to-end, all of the steps involved in my current Craft workflow. From the basics of getting Craft set up to run locally, through building and linking assets, testing and continuous integration, to atomic deployments using docker in production.

If you follow through this entire series you'll build, from scratch, a modern development workflow which should serve you well for both small and large Craft projects for the foreseeable future. Well, until docker isn't cool any more...

Git repo here.



Prerequisites

The great thing about working with docker locally is that it's pretty much the only thing you'll ever need to install, no matter what technologies your project uses.

So the only thing we need before we get stuck in is to make sure docker is installed on your development machine.
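
If you want to confirm that both the engine and compose are available before continuing, they each report their versions from the command line:

docker --version
docker-compose --version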

Getting Started

Let's create a new directory for us to work in, along with a subdirectory which will hold the source files for our project. I like to keep the project's files separated from the tooling used to build it so that it's easier for other developers to tell which files are project specific and which play a supporting role.

mkdir craft-in-docker
cd craft-in-docker
mkdir src

The first files we need are for a basic Craft project. We can use composer to create these for us.

cd src
docker run --rm -v $(pwd):/app composer create-project craftcms/craft .
sudo chown -R $USER:$USER .

Why am I running composer using docker?

I don't like installing things on my development machine. I've had many nightmares in the past caused by incorrect versions of tooling being installed, and I relish being able to avoid all of that by running everything in docker containers, choosing an appropriate version of each tool per execution.

I agree that it's a bit verbose though. Feel free to add an alias to your ~/.bashrc or ~/.zshrc. That's what I do with all my tooling.
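
For example, something like the following in your ~/.bashrc or ~/.zshrc lets you type composer as normal while it actually runs inside a throwaway container. The alias name is just my preference; the single quotes mean $(pwd) is evaluated each time you run it:

alias composer='docker run --rm -it -v $(pwd):/app composer'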

Just be careful of permissions when you're working in this way. Docker containers tend to run as a root user so you might not have write access to new files once they've been created. If this happens you can just chown as I've done above.

Now that we have a base Craft project set up, we need to create our docker containers which will house our new project.

We'll start by creating a directory structure for our config files.

cd ..
mkdir -p docker-config/php
mkdir -p docker-config/nginx

We'll be creating two custom docker containers for our project so we've created a directory for each of them. These directories will house the main config scripts for the containers along with any other scripts or assets that they need and that aren't directly related to our Craft codebase.

Let's start by creating a Dockerfile for our php container. This container's sole responsibility will be to run php-fpm: parsing our php files and returning appropriate responses.

Using your favourite editor, add the following to docker-config/php/Dockerfile.

FROM php:7.3-fpm

RUN apt-get update && apt-get install -y \
        libfreetype6-dev libjpeg62-turbo-dev \
        libpng-dev libbz2-dev \
        libssl-dev autoconf \
        ca-certificates curl g++ libicu-dev \
        libmagickwand-dev mariadb-client libzip-dev \
        && \
        pecl install imagick-3.4.3 \
        && \
        docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
        && \
        docker-php-ext-install \
        bcmath bz2 exif \
        ftp gd gettext mbstring opcache \
        shmop sockets sysvmsg sysvsem sysvshm \
        zip iconv pdo_mysql intl \
        && \
        docker-php-ext-enable imagick

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer
RUN composer global require hirak/prestissimo

RUN echo "upload_max_filesize = 10M" > /usr/local/etc/php/php.ini && \
    echo "post_max_size = 10M" >> /usr/local/etc/php/php.ini && \
    echo "max_execution_time = 300" >> /usr/local/etc/php/php.ini && \
    echo "memory_limit = 256M" >> /usr/local/etc/php/php.ini

COPY --chown=www-data:www-data ./src /var/www/html

RUN composer install -d /var/www/html/ && \
    chown -R www-data:www-data /var/www/html/vendor && \
    chown -R www-data:www-data /var/www/html/composer.lock

The file starts by setting our base image, the starting point upon which we'll be building. In this case we're using the official php-fpm 7.3 image, which comes with the php-fpm interpreter installed and not a lot else.

From this starting point we install a few OS and PHP level dependencies and nice-to-haves including everything that Craft needs to run.

Next up we download and install composer. This allows us to execute composer from within our container so that we can do things like 'composer require' without having to rely on an external image, as we had to earlier when we ran 'composer create-project'.
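
Once the containers are up and running (we'll get there shortly), installing a plugin looks something like this. craftcms/redactor is just an example package, swap in whatever you need:

docker-compose exec php composer require craftcms/redactor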

We also install prestissimo which just speeds up composer a lot.

Next we set a few PHP ini settings, because 2MB upload limits are never big enough.

The final steps are to actually copy our project into the container and install dependencies. We do this by simply copying all of our project files from src into the container's pre-configured webroot, running composer install and then tweaking some file permissions.

When this Dockerfile runs, it will create an image which contains everything required to read and interpret our project's PHP files. We can then take this resulting image and do other things with it, like mounting in temporary test code or deploying it to staging and production.
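
If you'd like to see this in action before we wire everything together, you can build the image by hand from the project root (the tag name here is arbitrary). The build context needs to be the project root because the Dockerfile copies in ./src:

docker build -t craft-in-docker-php -f docker-config/php/Dockerfile .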

Our next task is to create a Dockerfile for our nginx container. This Dockerfile will create an image which is responsible for receiving incoming traffic and either responding immediately with the contents of a static file, or passing the request on to our PHP container for it to process.

Add the following to docker-config/nginx/Dockerfile.

FROM nginx:1.15

COPY ./docker-config/nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --chown=www-data:www-data ./src/web /var/www/html/web

As you can see we're using nginx as our base image and then copying in a new default configuration file along with our Craft project's web folder.

We need to override the default config in order to tell nginx that it should forward certain requests to our PHP container. Let's have a look at that next.

Add the following to docker-config/nginx/default.conf.

server {
    listen 80 default_server;
    root /var/www/html/web;
    index index.html index.php;
    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    access_log off;
    error_log  /var/log/nginx/error.log error;

    sendfile off;

    client_max_body_size 10m;

    gzip              on;
    gzip_http_version 1.0;
    gzip_proxied      any;
    gzip_min_length   500;
    gzip_disable      "MSIE [1-6]\.";
    gzip_types        text/plain text/xml text/css
                      text/comma-separated-values
                      text/javascript
                      application/x-javascript
                      application/javascript
                      application/atom+xml;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_read_timeout 300;
    }

    location ~ /\.ht {
        deny all;
    }
}

This is a pretty basic setup for an nginx server: it will attempt to serve requests using files from the local disk and, if no matching file can be found, the request will be transformed into a request for index.php. It then defines special rules for requests to *.php files which cause them to be forwarded to another host/port combination via FastCGI.

The most important part of this file to understand is this:

fastcgi_pass php:9000;

This line tells nginx where to send requests that should reach your PHP container. Specifically we're forwarding these requests to a host named php on port 9000.

How does our nginx container know how to find the container named php? We'll discuss that shortly...

Before that we just need to jump back to the final line of our nginx Dockerfile, in which we copied our web directory into the nginx container's filesystem. Why did we do that?

When a request is received from the outside world it'll always hit our nginx service first. Nginx is super quick at handling requests for static files - it's what it was designed for - so there's no point in us forwarding any static file requests to PHP, which is super slow at handling requests for static files. However, our two containers have their own, completely separate filesystems. So we need to make sure that each of them has access to all of the files that they require in order to do their jobs as quickly as possible.

As nginx will only be serving static files to the public, and these static files will all be located in the publicly accessible web folder, that's all we need to copy in.

So now we have everything we need to create two docker images. We can execute these images on any system that has docker installed in order to create two running containers, but currently these containers will have no knowledge of each other and no ability to communicate with each other. We can fix this by using docker-compose, a tool which organises the configuration and execution of running containers.

Add the following to docker-compose.yml in the root of your project directory:

version: '2'
services:
  nginx:
      build:
        context: .
        dockerfile: ./docker-config/nginx/Dockerfile
      ports:
          - 80:80
      volumes:
          - cpresources:/var/www/html/web/cpresources

  php:
      build:
        context: .
        dockerfile: ./docker-config/php/Dockerfile
      expose:
          - 9000
      volumes:
          - cpresources:/var/www/html/web/cpresources
      environment:
        ENVIRONMENT: dev
        DB_DRIVER: mysql
        DB_SERVER: database
        DB_USER: project
        DB_PASSWORD: project
        DB_DATABASE: project
        DB_SCHEMA: public
        DB_TABLE_PREFIX: craft_
        SITE_URL: http://localhost
        SECURITY_KEY: AAAAAAAAAAAAAAAAAAAAAAAAAAA

  database:
      image: mariadb:10.3
      volumes:
          - db-data:/var/lib/mysql
      ports:
          - 3306:3306
      environment:
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: project
          MYSQL_USER: project
          MYSQL_PASSWORD: project

volumes:
  db-data:
  cpresources:

This file allows us to define a set of containers which should be executed as a single set. They will all share a common network and be able to address each other using their names as defined in this file.

Earlier we made a special note of the fact that nginx was sending all of its PHP file requests to a host called php on port 9000. This file is where that name is defined.
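
Once the stack is running (we'll bring it up in a moment) you can see this name resolution in action for yourself. getent should be available in both of these Debian based images:

docker-compose exec nginx getent hosts php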

Similarly we're setting environment variables in our PHP container which tell Craft to use a database server called database. We're then creating this container using a prebuilt mariadb image.

Our nginx and php containers are set to use the relevant Dockerfiles that we have already created.

One important addition to this file, required by Craft itself, is the shared cpresources directory mounted into both our php and nginx containers. This is needed because Craft (Yii, actually) assumes that our web server and our PHP process both have access to the same filesystem, but as we've split them into separate containers they do not. To solve this we create a named volume and mount it into the same location within the filesystems of the two containers. Using this method, when our php container writes to the cpresources folder, the nginx container will immediately have access to those files.
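
A quick way to convince yourself the volume really is shared, once the containers are up, is to write a throwaway file from one container and list it from the other:

docker-compose exec php touch /var/www/html/web/cpresources/hello.txt
docker-compose exec nginx ls /var/www/html/web/cpresources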

Let's see if our work so far has paid off. Settle in, this will take a few minutes...

docker-compose up

You should see your two images being built. They will then be executed, along with the mariadb image, in order to create three running containers which can all talk to each other.

In our docker-compose.yml we also bound our localhost's port 80 to the nginx container's port 80, so any incoming traffic on that port will reach nginx.

(If you see an error stating that a 'port is already allocated' it is likely because you already have something running which is listening for incoming traffic - MAMP, Apache, nginx or something similar. You'll need to find that and close it before continuing or change the port binding for nginx in docker-compose.yml to something like - 8080:80 to listen on 8080 instead.)
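
On Linux or macOS you can usually track down the culprit with lsof, assuming it's installed:

sudo lsof -i :80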

Give this a click:

http://localhost/admin

Hopefully you'll see Craft's installation page, ready to do its thing.

Feel free to have a play around with your new Craft-in-Docker environment.

Ctrl+C in your terminal to stop the containers from running when you're ready.
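
If you'd rather not keep a terminal occupied, the same stack can be run in the background and stopped explicitly. Note that docker-compose down removes the containers but leaves our named volumes, and therefore the database data, intact:

docker-compose up -d
docker-compose ps
docker-compose down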

Updating Files

A pretty mandatory thing that we'll want to do during local development is to edit our project's files. We wouldn't get far if we couldn't. So let's recap what our current build system is doing:

  1. We're creating an image containing PHP and all of Craft's dependencies
  2. Copying our project files into this image
  3. Running composer install to pull in 3rd party PHP packages
  4. Creating an nginx image which will route traffic to PHP and also serve static files directly
  5. Copying our public web folder into this image
  6. Using docker-compose to link these together along with a database

Both of our custom images rely on our files being copied into their filesystems. Once this is done we don't really have any way to make changes to those files without rebuilding the images and using them to launch new containers. That's a problem because we don't want to have to rebuild images every time we make changes to our project files.

You can prove this to yourself by editing src/templates/index.html and seeing that it doesn't have any impact when you refresh your browser.

To fix this we need to 'mount' our project files into our running containers. Docker makes this relatively easy for us: all we need to do is tell it which files on our host filesystem we'd like to mount into the container, and where in the container we'd like them to go.

In essence what we'll be doing is replacing the project files that we copied in during the image build process with a link to the same files on our development machine. This mounting process is performed when your container is created from the image so you can think of it as an execution-time tweak to your base image. The image is the source of truth, but we can make little changes to it in order to get it to do useful things.

In docker-compose.yml update the nginx and php volumes to:

  nginx:
      ...
      volumes:
          - cpresources:/var/www/html/web/cpresources
          - ./src/web:/var/www/html/web

  php:
      ...
      volumes:
          - cpresources:/var/www/html/web/cpresources
          - ./src/composer.json:/var/www/html/composer.json
          - ./src/composer.lock:/var/www/html/composer.lock
          - ./src/config:/var/www/html/config
          - ./src/modules:/var/www/html/modules
          - ./src/templates:/var/www/html/templates
          - ./src/web:/var/www/html/web

Here we're telling docker-compose to mount our host filesystem's web folder over the top of the nginx image's web folder. We're also mounting several of our project's directories into our php container. Why not all folders? Two reasons:

  • Mounted filesystems are slow, we want to mount as little as possible.
  • We don't want to check the vendor directory into our version control. If another developer clones the project we don't want our configuration to reference a directory that might not exist on our host filesystem. When the PHP image is built the vendor directory will be created inside the container anyway.

From now on, whenever we edit a project file on our host filesystem within one of these mounted directories, the change will be immediately reflected inside our two running containers.

Use Ctrl+C to kill your running containers if you haven't already and then run docker-compose up again to recreate your containers with the new mounted files. Give your browser a refresh.

/var/www/html/config isn't writable by PHP. Please fix that.

That's a bit sad.

To figure out what's going on here we need to understand three things:

  1. Unix permissions
  2. Docker containers have their own set of users and groups, completely independent of the host on which they're running
  3. Files mounted into a container carry their permissions with them

So, on our host filesystem our project files are most likely owned by the user that we're currently logged in as (if you're on windows YMMV). We can check with:

ls -l

total 12
-rw-rw-r-- 1 matt matt 1121 Dec  4 11:29 docker-compose.yml
drwxrwxr-x 4 matt matt 4096 Dec  3 23:28 docker-config
drwxrwxr-x 8 matt matt 4096 Dec  3 23:28 src

All of my project files are owned by the user matt and the group matt. You can also see that write permission is denied for users that are not matt and that don't belong to the matt group.

Given that this user and group do not exist inside our container, it's no wonder PHP is having trouble writing to these directories.
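
You can see the mismatch for yourself by comparing your user on the host with the www-data user that php-fpm runs as inside the container:

id
docker-compose exec php id www-data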

There are several ways to solve this problem, each has pros and cons:

  • Match the user ids inside the container with the user ids on your host (not recommended - binds your built images to your host's configuration)
  • Change the owner and group of the files on your host to match the container users (not recommended - makes files difficult to edit on your host)
  • Relax the permissions of specific directories and files on your host (recommended - but be careful if you're working with sensitive files)

The third option is by far the easiest and prevents you having to configure your image to match your host which would make it very difficult to run elsewhere.

All we need to do is:

chmod -R 777 src/config
chmod 777 src/composer.*

These are the things that we've mounted into our container from our host that we expect Craft to want to edit. By setting their permissions to 777 we're allowing any user (including those inside our containers) to make changes to them.

We should now be able to refresh our browser window and see the previous error has been resolved.

Now we're all set to start configuring Craft, creating template files and placing static assets in our web folder.

Image Uploads

In order to allow image uploads via Craft we'll need to set up an asset source. During local development it is likely you'll want to use a local filesystem asset source and it's also likely that the location of uploaded files will be within the public web directory.

During our configuration steps above we mounted our host filesystem's web directory into both the PHP and nginx containers so any files which PHP writes to this folder will actually be being written to our host filesystem and will be immediately reflected in our nginx container too.
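
To demonstrate: create a file from inside the php container and it will show up in src/web on your host (owned by root, because that's who docker-compose exec runs as by default), and therefore in the nginx container too:

docker-compose exec php touch /var/www/html/web/hello.txt
ls -l src/web/hello.txt
sudo rm src/web/hello.txt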

We can define a new asset source in Craft's control panel in the usual way. The settings are pretty standard; just take note of the filesystem path, which is the path inside the PHP container that Craft will be writing the uploaded files to (in our case somewhere inside /var/www/html/web).

Once that's saved we can try uploading an image to the new asset source.

You'll most likely be met with another error. We're receiving it because Craft is trying to create a folder called images in our host filesystem's mounted web directory, but we haven't changed any permissions there to allow this to happen.

mkdir src/web/images
chmod 777 src/web/images
printf '*\n!.gitignore\n' > src/web/images/.gitignore

By creating the images folder on our host filesystem and setting relaxed permissions we allow Craft to successfully process image uploads to this directory. I've also added a .gitignore which firstly prevents test uploads from being checked into version control, and secondly ensures this directory is created when other developers clone the project.

Checking Logs

Because we haven't mounted anything over Craft's storage folder in the PHP container, Craft will be writing its log files to the container's filesystem rather than our host's, which means there's no obvious way to check their contents. Docker does allow us to run commands inside a running container though, which we can use to achieve this.

# Output the last 200 lines of the log
docker-compose exec php tail -n 200 /var/www/html/storage/logs/web.log

# Pipe the log file into a text editor for easy navigating
docker-compose exec php cat /var/www/html/storage/logs/web.log | gedit -
docker-compose exec php cat /var/www/html/storage/logs/web.log | mate

To improve this process of reviewing logs there's also the option of using Docker's logging system which will give you much more flexibility.
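
As a middle ground, anything the containers write to stdout/stderr can already be viewed with docker-compose itself:

docker-compose logs -f php
docker-compose logs --tail=100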

Next Steps

We should now have everything we need to get on with some local development. In the next part of this series I'll cover how to run front end build chains in docker so that we can build our javascript and css without installing any dependencies, while ensuring our build process is version controlled and transferable between developers without encountering all of those classic node/npm version mismatch issues.

Feedback

Noticed any mistakes, improvements or questions? Or have you used this info in one of your own projects? Please drop me a note in the comments below. 👌

Edits

  • 2018-12-04: Added composer.lock to mounted files in docker-compose.yml to ensure packages added via composer inside the container are checked into version control.
  • 2018-12-04: Added link to docker logging article.
  • 2018-12-19: Fixed db-data mount path in docker-compose.yml.
  • 2019-05-13: Updated PHP container to 7.3 and fixed mcrypt dependency.
  • 2019-06-16: Updated composer's docker image name to point to new location.
  • 2019-06-16: Removed mcrypt lib which I missed previously.
