Andrew Welch · Insights · #devops #frontend #docker

Published, updated · 5 min read



An Annotated Docker Config for Frontend Web Development

A local development environment with Docker allows you to shrink-wrap the devops your project needs as config, making onboarding frictionless


Docker is a tool for containerizing your applications, which means that your application is shrink-wrapped with the environment that it needs to run.

This allows you to define the devops your application needs in order to run as config, which can then be easily replicated and reused.

The principles and approach discussed in this article are universal.

While there are many uses for Docker, this article will focus on using Docker as a local environment for frontend web development.

Although Craft CMS is referenced in this article, Docker works well for any kind of web development with any kind of CMS or dev stack (Laravel, NodeJS, Rails, whatevs).

The Docker config used here is used in both the devMode.fm GitHub repo, and in the nystudio107/craft boilerplate Composer project if you want to see some “in the wild” examples.

The Docker config on its own can be found at nystudio107/docker-images, and the pre-built base images are up on DockerHub.

The base images are all multi-arch, so you folks using Apple Silicon M1 processors will get native images, too.

Why Docker?

If you’re doing frontend web development, you very likely already have some kind of a local development environment.

So why should you use Docker instead?

This is a very reasonable question to ask, because any kind of switch in tooling requires some upskilling, and some work.


I’d long been using Homestead — which is really just a custom Vagrant box with some extras — as my local dev environment, as discussed in the Local Development with Vagrant / Homestead article.

I’d chosen to use Homestead because I wanted a local dev environment that was deterministic, disposable, and separated my development environment from my actual computer.


Local development comparison

Docker has all of these advantages, but takes a much more lightweight approach. Here are the advantages of Docker for me:

  • Each application has exactly the environment it needs to run, including specific versions of any of the plumbing needed to get it to work (PHP, MySQL, Postgres, whatever)
  • Onboarding others becomes trivial: all they need to do is install Docker, type docker-compose up, and away they go
  • Your development environment is entirely disposable; if something goes wrong, you just delete it and fire up a new one
  • Your local computer is separate from your development environment, so switching computers is trivial, and you won’t run into issues where you hose your computer or are stuck with conflicting versions of devops services
  • The cost of trying different versions of various services is low: just change a number in a .yaml file, docker-compose up, and away you go
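For instance, trying a different MariaDB version can be a one-line change (a hypothetical snippet for illustration, not part of the config presented below):

```yaml
services:
  mariadb:
    # Change the tag to try a different version, then re-run `docker-compose up`
    image: mariadb:10.4
```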

There are other advantages as well, but these are the more important ones for me.

Additionally, containerizing your application in local development is a great first step toward using a containerized deployment process, and running Docker in production as well.

A disadvantage with any kind of virtualization is performance, but that can be mitigated by having modern hardware, a bunch of memory, and optimizing Docker via the Performance Tuning Docker for Mac article.

Understanding Docker

This article is not a comprehensive tutorial on Docker, but I will attempt to explain some of the more important, broader concepts.

Docker has the notion of containers, each of which runs one or more services. You can think of each container as a mini virtual machine (even though technically, they are not).

While you can run multiple services in a single Docker container, separating each service out into its own container has many advantages.


If PHP, Apache, and MySQL are all in separate containers, they won’t affect each other, and they can also be more easily swapped in and out.

If you decide you want to use Nginx or Postgres instead, the decoupling into separate containers makes it easy!

Docker containers are built from Docker images, which can be thought of as a recipe for building the container, with all of the files and code needed to make it happen.

If a Docker image is the recipe, a Docker container is the finished, resulting meal.

Docker images almost always are layered on top of other existing images that they extend FROM. For instance, you might have a base image from Ubuntu or Alpine Linux that provides the necessary operating system layer for other processes like Nginx to run.


Docker Image Layers

This layering works thanks to the Union file system, which handles composing all the layers of the cake together for you.
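As a sketch of this layering (a hypothetical Dockerfile, not one from this project), each instruction adds a layer on top of the image it extends FROM:

```dockerfile
# Layer: the Alpine Linux base image provides the OS userland
FROM alpine:3.12

# Layer: packages installed on top of the base
RUN apk add --no-cache nginx

# Layer: our own config copied on top of that
COPY default.conf /etc/nginx/conf.d/default.conf
```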

We said earlier that Docker is more lightweight than running a full Vagrant VM, and it is… but unfortunately, unless you’re running Linux there is still a virtualization layer running, which is HyperKit for the Mac, and Hyper-V for Windows.

Docker for Mac & Windows still has a virtualization layer, it’s just relatively lightweight.

Fortunately, you don’t need to be concerned with any of this, but the performance implications do inform some of the decisions we’ve made in the Docker config presented here.

For more information on Docker, I’d highly recommend the Docker Mastery course (if it’s not on sale now, don’t worry, it will be at some point) and also the following devMode.fm episodes:

…and there are tons of other excellent educational resources on Docker out there, such as Matt Gray’s Craft in Docker: Everything I’ve Learnt presentation, and his excellent A Craft CMS Development Workflow With Docker series.

In this article, we will focus on annotating a real-world Docker config that’s used in production. We’ll discuss various Docker concepts as we go, but the primary goal here is documenting a working config.

This article is what I wished existed when I started learning Docker

I learn best by looking at a working example, and picking it apart. If you do, too, let’s get going!

Xdebug performance

Before we delve into the Docker setup, a quick discussion of xdebug is in order.

Xdebug is a tool that allows you to debug your PHP code by setting breakpoints, inspecting variables, profiling code, and so on. It’s vital when you need it, but it also slows things down when you don’t.

xdebug is crucial for PHP development, but it’s also slow

Most of the time we don’t need xdebug, but the overhead of merely having xdebug installed can slow down frontend requests. There are ways you can disable xdebug via an environment variable (and other methods), but they usually require rebuilding your container.

Additionally, just having xdebug installed adds overhead. I was researching this conundrum (whilst also re-evaluating my life) when I discovered the article Developing at Full Speed with Xdebug.

Essentially, we just have two PHP containers running: one that’s our development environment with xdebug installed, and the other that’s our production environment without our debugging tools.


Dual PHP containers for xdebug

What happens is a request comes in, and Nginx looks to see if there’s an XDEBUG_SESSION or XDEBUG_PROFILE cookie set. If there’s no cookie, it routes the request to the regular php container.

If, however, the XDEBUG_SESSION or XDEBUG_PROFILE cookie is set (with any value), it routes the request to the php_xdebug container.

You can set this cookie with a browser extension, your IDE, or via a number of other methods. Here is the Xdebug Helper browser extension for your favorite browsers: Chrome — Firefox — Safari


This elegant solution allows us to develop at full speed using Docker & PHP, while also having xdebug instantly available if we need it.

🔥

Alpine Linux

When I originally created my Docker setup, I used the default Ubuntu images, because I was familiar with Ubuntu, and I’d been told that I’d have fewer issues getting things up and running.

This was all true, but I decided to go in and refactor all of my images to be based off of Alpine Linux, which is a version of Linux that stresses small image sizes and efficiency. Here’s what it looks like converted over:


Ubuntu vs. Alpine image size

N.B.: the sizes above refer only to the space on disk; the images don’t use up this much memory when in use. On my laptop, I have 2GB allocated to Docker, and the memory usage looks something like this:


Memory usage out of the 2GB of RAM configured for Docker

Having smaller Docker images means that they take less time to download, they take up less disk space, and they are in general more efficient.

And they are more in line with the Docker “bring only what you need” philosophy.

My Docker Directory Structure

This Docker setup uses a directory structure that looks like this (don’t worry, it’s not as complex as it seems; many of the Docker images here are for reference only, and are actually pre-built):

├── buddy.yml
├── buildchain
│   ├── package.json
│   ├── package-lock.json
│   ├── postcss.config.js
│   ├── tailwind.config.js
│   ├── webpack.common.js
│   ├── webpack.dev.js
│   ├── webpack.prod.js
│   └── webpack.settings.js
├── CHANGELOG.md
├── cms
│   ├── composer.json
│   ├── composer.lock
│   ├── config
│   ├── craft
│   ├── craft.bat
│   ├── example.env
│   ├── modules
│   ├── storage
│   ├── templates
│   ├── vendor
│   └── web
├── db-seed
│   └── db_seed.sql
├── docker-compose.yml
├── docker-config
│   ├── mariadb
│   │   └── Dockerfile
│   ├── nginx
│   │   ├── default.conf
│   │   └── Dockerfile
│   ├── node-dev-base
│   │   └── Dockerfile
│   ├── node-dev-webpack
│   │   └── Dockerfile
│   ├── php-dev-base
│   │   ├── Dockerfile
│   │   ├── xdebug.ini
│   │   └── zzz-docker.conf
│   ├── php-dev-craft
│   │   └── Dockerfile
│   ├── php-prod-base
│   │   ├── Dockerfile
│   │   └── zzz-docker.conf
│   ├── php-prod-craft
│   │   ├── Dockerfile
│   │   └── run_queue.sh
│   ├── postgres
│   │   └── Dockerfile
│   └── redis
│       └── Dockerfile
├── migrations
├── scripts
│   ├── common
│   ├── docker_prod_build.sh
│   ├── docker_pull_db.sh
│   ├── docker_restore_db.sh
│   └── example.env.sh
├── src
│   ├── conf
│   ├── css
│   ├── img
│   ├── js
│   ├── php
│   ├── templates -> ../cms/templates
│   └── vue
└── tsconfig.json

Here’s an explanation of what the top-level directories are:

  • cms — everything needed to run Craft CMS. This is the “app” of the project
  • docker-config — an individual directory for each service that the Docker setup uses, with a Dockerfile and other ancillary config files therein
  • scripts — helper shell scripts that do things like pull a remote or local database into the running Docker container. These are derived from the Craft-Scripts shell scripts
  • src — the frontend JavaScript, CSS, Vue, etc. source code that the project uses

Each service is referenced in the docker-compose.yaml file, and defined in the Dockerfile that is in the corresponding directory in the docker-config/ directory.

It isn’t strictly necessary to have a separate Dockerfile for each service if they are just derived from a base image. But I like the consistency, and the ease of future expansion should something custom be necessary down the road.

You’ll also notice that there are php-dev-base and php-dev-craft directories, as well as node-dev-base and node-dev-webpack directories, and you might be wondering why they aren’t consolidated.

The reason is that there’s a whole lot of base setup in both that just never changes, so instead of rebuilding it each time, we can build it once and publish the images up on DockerHub.com as nystudio107/php-dev-base and nystudio107/node-dev-base.

Then we can layer anything specific about our project on top of these base images in the respective -craft services. This saves us significant building time, while keeping flexibility.

The docker-compose.yaml file

While a docker-compose.yaml file isn’t required when using Docker, from a practical point of view you’ll almost always use one. The docker-compose.yaml file allows you to define multiple containers for running the services you need, and to coordinate starting them up and shutting them down in unison.

Then all you need to do is run docker-compose up via the terminal in a directory that has a docker-compose.yaml file, and Docker will start up all of your containers for you!

Here’s an example of what that might look like, starting up your Docker containers:


Example docker-compose up output

Let’s have a look at our docker-compose.yaml file:

version: '3.7'

services:
  # nginx - web server
  nginx:
    build:
      context: ./docker-config/nginx
      dockerfile: ./Dockerfile
    env_file: &env
      - ./cms/.env
    init: true
    ports:
      - "8000:80"
    volumes:
      - cpresources:/var/www/project/cms/web/cpresources:delegated
      - ./cms/web:/var/www/project/cms/web:cached
  # php - run php-fpm
  php:
    build: &php-build
      context: ./docker-config/php-prod-craft
      dockerfile: ./Dockerfile
    depends_on:
      - "mariadb"
      - "redis"
    env_file:
      *env
    expose:
      - "9000"
    init: true
    volumes: &php-volumes
      - cpresources:/var/www/project/cms/web/cpresources:delegated
      - storage:/var/www/project/cms/storage:delegated
      - ./cms:/var/www/project/cms:cached
      # Specific directories that need to be bind-mounted
      - ./cms/storage/logs:/var/www/project/cms/storage/logs:delegated
      - ./cms/storage/runtime/compiled_templates:/var/www/project/cms/storage/runtime/compiled_templates:delegated
      - ./cms/storage/runtime/compiled_classes:/var/www/project/cms/storage/runtime/compiled_classes:delegated
      - ./cms/vendor:/var/www/project/cms/vendor:delegated
  # php - run php-fpm with xdebug
  php_xdebug:
    build:
      context: ./docker-config/php-dev-craft
      dockerfile: ./Dockerfile
    depends_on:
      - "php"
    env_file:
      *env
    expose:
      - "9000"
    init: true
    volumes:
      *php-volumes
  # queue - runs queue jobs via php craft queue/listen
  queue:
    build:
      *php-build
    command: /var/www/project/run_queue.sh
    depends_on:
      - "php"
    env_file:
      *env
    init: true
    volumes:
      *php-volumes
  # mariadb - database
  mariadb:
    build:
      context: ./docker-config/mariadb
      dockerfile: ./Dockerfile
    env_file:
      *env
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: project
    init: true
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql
      - ./db-seed/:/docker-entrypoint-initdb.d
  # redis - key/value database for caching & php sessions
  redis:
    build:
      context: ./docker-config/redis
      dockerfile: ./Dockerfile
    expose:
      - "6379"
    init: true
  # vite - frontend build system
  vite:
    build:
      context: ./docker-config/node-dev-vite
      dockerfile: ./Dockerfile
    env_file:
      *env
    init: true
    ports:
      - "3000:3000"
    volumes:
      - ./buildchain:/var/www/project/buildchain:cached
      - ./buildchain/node_modules:/var/www/project/buildchain/node_modules:delegated
      - ./cms/web:/var/www/project/cms/web:delegated
      - ./src:/var/www/project/src:cached
      - ./cms/templates:/var/www/project/cms/templates:cached

volumes:
  db-data:
  cpresources:
  storage:

This .yaml file has 3 top-level keys:

  • version — the version number of the Docker Compose file format, which corresponds to different capabilities offered by different versions of the Docker Engine
  • services — each service corresponds to a separate Docker container that is created using a separate Docker image
  • volumes — named volumes that are mounted and can be shared amongst your Docker containers (but not with your host computer), for storing persistent data

We’ll detail each service below, but there are a few interesting tidbits to cover first.

BUILD

When you’re creating a Docker container, you can either base it on an existing image (either a local image or one pulled down from DockerHub.com), or you can build it locally via a Dockerfile.

As mentioned above, I chose the methodology that each service would be created as a build from a Dockerfile (all of which extend FROM an image up on DockerHub.com) to keep things consistent.

This means that some of the Dockerfiles we use are nothing more than a single line, e.g.: FROM mariadb:10.3, but this setup does allow for expansion.

The two keys used for build are:

  • context — this specifies where the working directory for the build should be, relative to the docker-compose.yaml file. This is set to the root directory of each service
  • dockerfile — this specifies a path to the Dockerfile to use to build the service’s Docker container. Think of the Dockerfile as a local Docker image

So the context is always the root directory of each service, with the Dockerfile and any supporting files for each service off in a separate directory. We do it this way so that we’re not passing down more than is needed when building the Docker images, which would slow down the build process significantly (thanks to Mizux Seiha & Patrick Harrington for pointing this out!).

DEPENDS_ON

This just lets you specify which other services this particular service depends on; it allows you to ensure that other containers are up and running before this container starts up.

ENV_FILE

The env_file setting specifies a path to your .env file, for key/value pairs that will be injected into a Docker container.

Docker does not allow for quotes in its .env file, which is contrary to how .env files work almost everywhere else… so remove any quotes you have in your .env file.

You’ll notice that for the nginx service, there’s a strange &env value in the env_file setting, and for the other services, the setting is *env. This is taking advantage of YAML aliases, so if we do change the .env file path, we only have to change it in one place.
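Anchors and aliases are a standard YAML feature, not something Docker-specific; here’s a minimal sketch of how they work:

```yaml
services:
  nginx:
    env_file: &env      # `&env` names this value as an anchor...
      - ./cms/.env
  php:
    env_file:
      *env              # ...and `*env` reuses it, so the path is defined once
```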

Doing it this way also ensures that all of the .env environment variables are available in every container. For more on environment variables, check out the Flat Multi-Environment Config for Craft CMS 3 article.

Because it’s Docker that is injecting these .env environment variables, if you change your .env file, you’ll need to restart your Docker containers.

INIT

Setting init: true for a service causes signals to be forwarded to the container’s process, which allows it to terminate quickly when you halt it with Control-C.

PORTS

This specifies the port that should be exposed outside of the container, followed by the port that the container uses internally. So for example, the nginx service has "8000:80", which means the externally accessible port for the Nginx webserver is 8000, and the internal port the service runs on is 80.
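So if port 8000 happens to be taken on your host, only the left-hand number needs to change (a hypothetical edit):

```yaml
    ports:
      - "8001:80"   # host port 8001 → container port 80 (unchanged)
```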

If this sounds confusing, understand that Docker uses its own internal network to allow containers to talk to each other, as well as to the outside world.

VOLUMES

Docker containers run in their own little world, which is great for isolation purposes, but at some point you do need to share things from your “host” computer with the Docker container.

Docker volumes allow you to do this. You specify either a named volume or a path on your host, followed by the path where this volume should be bind mounted in the Docker container.

This is where performance problems can happen with Docker on the Mac and Windows, so we use some hints to help with performance:

  • consistent — perfect consistency (host and container have an identical view of the mount at all times)
  • cached — the host’s view is authoritative (permit delays before updates on the host appear in the container)
  • delegated — the container’s view is authoritative (permit delays before updates on the container appear in the host)

So for things like node_modules/ and vendor/ we mark them as :delegated, because while we want them shared, the container is in control of modifying these volumes.

Some Docker setups I’ve seen put these directories into a named volume, which means they are visible only to the Docker containers.

But the problem is that we then lose out on our editor auto-completion, because our editor has nothing to index.

This is a non-negotiable for me

See the Auto-Complete Craft CMS 3 APIs in Twig with PhpStorm article for details.

Service: Nginx

Nginx is the web server of choice for me, both in local dev and in production.

FROM nginx:1.19-alpine

COPY ./default.conf /etc/nginx/conf.d/default.conf

We’ve based the container on the nginx image, tagged at version 1.19.

The only modification it makes is COPYing our default.conf file into place:

# default Docker DNS server
resolver 127.0.0.11;

# If a cookie doesn't exist, it evaluates to an empty string, so if neither cookie exists, it'll match :
# (empty string on either side of the :), but if either or both cookies are set, it won't match, and will hit the default rule
map $cookie_XDEBUG_SESSION:$cookie_XDEBUG_PROFILE $my_fastcgi_pass {
    default php_xdebug;
    ':' php;
}

server {
    listen 80;
    listen [::]:80;

    server_name _;
    root /var/www/project/cms/web;
    index index.html index.htm index.php;
    charset utf-8;

    gzip_static  on;

    ssi on;

    client_max_body_size 0;

    error_page 404 /index.php?$query_string;

    access_log off;
    error_log /dev/stdout info;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        try_files $uri/index.html $uri $uri/ /index.php?$query_string;
    }

    location ~ [^/]\.php(/|$) {
        try_files $uri $uri/ /index.php?$query_string;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass $my_fastcgi_pass:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param HTTP_PROXY "";

        add_header Last-Modified $date_gmt;
        add_header Cache-Control "no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0";
        if_modified_since off;
        expires off;
        etag off;

        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
    }
}

This is just a simple Nginx config that works well with Craft CMS. You can find more about Nginx configs for Craft CMS in the nginx-craft GitHub repo.

The only real “magic” here is our map directive:

# If a cookie doesn't exist, it evaluates to an empty string, so if neither cookie exists, it'll match :
# (empty string on either side of the :), but if either or both cookies are set, it won't match, and will hit the default rule
map $cookie_XDEBUG_SESSION:$cookie_XDEBUG_PROFILE $my_fastcgi_pass {
    default php_xdebug;
    ':' php;
}

This just sets the $my_fastcgi_pass variable to php if there is no XDEBUG_SESSION or XDEBUG_PROFILE cookie set; otherwise it sets it to php_xdebug.

We use this variable later on in the config file:

        fastcgi_pass $my_fastcgi_pass:9000;

This is what allows the routing of debug requests to the right container, for performance reasons.
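The decision the map directive makes can be expressed as shell, just as an illustrative sketch (this function is not part of the project; it only mirrors the routing logic):

```shell
# Mimic Nginx's map: route to php_xdebug only if either cookie has a value
route_request() {
  local xdebug_session="$1" xdebug_profile="$2"
  if [ "${xdebug_session}:${xdebug_profile}" = ":" ]; then
    echo "php"          # neither cookie set → regular php container
  else
    echo "php_xdebug"   # either cookie set → xdebug container
  fi
}

route_request "" ""     # no cookies
route_request "1" ""    # XDEBUG_SESSION set
```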

Service: MariaDB

MariaDB is a drop-in replacement for MySQL that I tend to use instead of MySQL itself. It was written by the original author of MySQL, and is binary compatible with MySQL.

FROM yobasystems/alpine-mariadb:10.4.15

We’ve based the container on the yobasystems/alpine-mariadb image, tagged at version 10.4.15.

There’s no modification at all to the source image.

When the container is started for the first time, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d, so we can use this to seed the initial database. See Initializing a fresh instance.
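For example, you can seed the database by dropping a .sql file into the directory that’s bind-mounted to /docker-entrypoint-initdb.d (a hypothetical seed file; in this project the ./db-seed/ directory serves that role):

```shell
# Create a seed SQL file; anything in the mounted directory runs once,
# on the first container start with an empty data volume
SEED_DIR="$(mktemp -d)/db-seed"   # stand-in for the project's ./db-seed/
mkdir -p "$SEED_DIR"
cat > "$SEED_DIR/db_seed.sql" <<'SQL'
-- Executed only when the data volume is empty
CREATE DATABASE IF NOT EXISTS project;
SQL
```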

Service: Postgres

Postgres is a robust database that I am using more and more for Craft CMS projects. It’s not used in the docker-compose.yaml presented here, but I keep the configuration around in case I want to use it.

Postgres is used in local dev and in production on the devMode.fm GitHub repo, if you want to see it implemented.

FROM postgres:12.2

We’ve based the container on the postgres image, tagged at version 12.2.

There’s no modification at all to the source image.

When the container is started for the first time, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d, so we can use this to seed the initial database. See Initialization scripts.

Service: Redis

Redis is a key/value pair database that I set all of my Craft CMS installs to use, both as a caching method and as a session handler for PHP.

FROM redis:5-alpine

We’ve based the container on the redis image, tagged at version 5.

There’s no modification at all to the source image.

Service: php

PHP is the language that the Yii2 framework, and thus Craft CMS itself, is built on, so we need it in order to run our app.

This is the PHP container that is used for regular web requests, so it does not include xdebug, for performance reasons.

This service is composed of a base image that contains all of the packages and PHP extensions we’ll always need, and then a project-specific image that contains whatever additional things are needed for our project.

FROM php:8.0-fpm-alpine

# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
		autoconf \
		dpkg-dev \
		dpkg \
		file \
		g++ \
		gcc \
		libc-dev \
		make \
		pkgconf \
		re2c \
		wget

# Install packages
RUN set -eux; \
    # Packages needed only for build
	apk add --no-cache --virtual .build-deps \
		$PHPIZE_DEPS \
    && \
    # Packages to install
    apk add --no-cache \
        bzip2-dev \
        ca-certificates \
        curl \
        fcgi \
        freetype-dev \
        gettext-dev \
        icu-dev \
        imagemagick \
        imagemagick-dev \
        libjpeg-turbo-dev \
        libmcrypt-dev \
        libpng \
        libpng-dev \
        libressl-dev \
        libtool \
        libxml2-dev \
        libzip-dev \
        oniguruma-dev \
        unzip \
    && \
    # pecl PHP extensions
    pecl install \
        imagick-3.4.4 \
        redis \
    && \
    # Configure PHP extensions
    docker-php-ext-configure \
        gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        bcmath \
        bz2 \
        exif \
        ftp \
        gettext \
        gd \
        iconv \
        intl \
        mbstring \
        opcache \
        pdo \
        shmop \
        sockets \
        sysvmsg \
        sysvsem \
        sysvshm \
        zip \
    && \
    # Enable PHP extensions
    docker-php-ext-enable \
        imagick \
        redis \
    && \
    # Remove the build deps
    apk del .build-deps \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /tmp/* /var/tmp/*

# https://github.com/docker-library/php/issues/240
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/ gnu-libiconv
ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php

# Copy the `zzz-docker-php.ini` file into place for php
COPY zzz-docker-php.ini /usr/local/etc/php/conf.d/

# Copy the `zzz-docker-php-fpm.conf` file into place for php-fpm
COPY zzz-docker-php-fpm.conf /usr/local/etc/php-fpm.d/

We’ve based the container on the php image, tagged at version 8.0.

We’re then adding a bunch of packages that we want available for our Alpine operating system base, some debugging tools, as well as some PHP extensions that Craft CMS requires.

Then we copy into place the zzz-docker-php-fpm.conf file:

[www]
pm.max_children = 10
pm.process_idle_timeout = 30s
pm.max_requests = 1000

This just sets some defaults for php-fpm that make sense for local development.

Then we copy into place the zzz-docker-php.ini file, with some sane defaults for local Craft CMS development:

[php]
memory_limit=256M
max_execution_time=300
max_input_time=300
max_input_vars=5000
upload_max_filesize=100M
post_max_size=100M
[opcache]
opcache.enable=1
opcache.revalidate_freq=0
opcache.validate_timestamps=1

By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it available as nystudio107/php-prod-base on DockerHub.

Since it’s pre-built, we don’t have to build it every time, and can layer anything project-specific on top of this image via the php-prod-craft container image:

FROM nystudio107/php-prod-base:8.0-alpine

# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
        autoconf \
        dpkg-dev \
        dpkg \
        file \
        g++ \
        gcc \
        libc-dev \
        make \
        pkgconf \
        re2c \
        wget

# Install packages
RUN set -eux; \
    # Packages needed only for build
    apk add --no-cache --virtual .build-deps \
        $PHPIZE_DEPS \
    && \
    # Packages to install
    apk add --no-cache \
        su-exec \
        gifsicle \
        jpegoptim \
        libwebp-tools \
        nano \
        optipng \
        mysql-client \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        pdo_mysql \
    && \
    # Install Composer
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer \
    && \
    # Remove the build deps
    apk del .build-deps \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /var/www/project

COPY ./run_queue.sh .
RUN chmod a+x run_queue.sh \
    && \
    mkdir -p /var/www/project/cms/storage \
    && \
    mkdir -p /var/www/project/cms/web/cpresources \
    && \
    chown -R www-data:www-data /var/www/project
COPY ./composer_install.sh .
RUN chmod a+x composer_install.sh

# Run the composer_install.sh script that will do a `composer install`:
# - If `composer.lock` is missing
# - If `vendor/` is missing
# ...then start up php-fpm. The `run_queue.sh` script in the queue container
# will take care of running any pending migrations and apply any Project Config changes,
# as well as set permissions via an async CLI process
CMD ./composer_install.sh \
    && \
    php-fpm

This is the image that we actually build into a container, and use for our project. We install the nano editor because I find it handy sometimes, and we also install pdo_mysql so that PHP can connect to our MariaDB database.

We do it this way so that if we want to cre­ate a Craft CMS project that uses Post­gres, we can just swap in the PDO exten­sion need­ed here.
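For example, a Postgres variant of this image might swap the driver like so (a sketch, not part of the actual repo; postgresql-dev provides the libpq headers that pdo_pgsql builds against on Alpine):

```dockerfile
# Sketch: a Postgres-backed variant swaps the PDO driver
RUN apk add --no-cache \
        postgresql-dev \
        postgresql-client \
    && \
    docker-php-ext-install \
        pdo_pgsql
```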

Then we make sure the var­i­ous storage/ and cpresources/ direc­to­ries are in place, with the right own­er­ship so that Craft will run properly.

Then we do a composer install every time the Docker container is started up. While this takes a little more time, it makes things a whole lot easier when working with teams or on multiple environments.

We have to do the composer install as part of the Dock­er image CMD because the file sys­tem mounts aren’t in place until the CMD is run.

This allows us to update our Com­pos­er depen­den­cies just by delet­ing the composer.lock file, and doing docker-compose up

Sim­ple.

The alter­na­tive is doing a docker exec -it craft_php_1 /bin/bash to open up a shell in our con­tain­er, and run­ning the com­mand man­u­al­ly. Which is fine, but a lit­tle con­vo­lut­ed for some.

This con­tain­er runs the shell script composer_install.sh when it starts up, to do a composer install if composer.lock or vendor/ is not present:

#!/bin/bash

# Composer Install shell script
#
# This shell script runs `composer install` if either the `composer.lock` file or
# the `vendor/` directory is not present
#
# @author    nystudio107
# @copyright Copyright (c) 2022 nystudio107
# @link      https://nystudio107.com/
# @license   MIT

# Ensure permissions on directories Craft needs to write to
chown -R www-data:www-data /var/www/project/cms/storage
chown -R www-data:www-data /var/www/project/cms/web/cpresources
# Check for `composer.lock` & `vendor/`
cd /var/www/project/cms
if [ ! -f "composer.lock" ] || [ ! -d "vendor" ]; then
    su-exec www-data composer install --verbose --no-progress --no-scripts --optimize-autoloader --no-interaction
    # Wait until the MySQL db container responds
    echo "### Waiting for MySQL database"
    until eval "mysql -h mysql -u $DB_USER -p$DB_PASSWORD $DB_DATABASE -e 'select 1' > /dev/null 2>&1"
    do
      sleep 1
    done
    # Run any pending migrations/project config changes
    su-exec www-data composer craft-update
fi

The composer.json scripts:

{
  "require": {
    "craftcms/cms": "^3.4.0",
    "vlucas/phpdotenv": "^3.4.0",
    "yiisoft/yii2-redis": "^2.0.6",
    "nystudio107/craft-autocomplete": "^1.0.0",
    "nystudio107/craft-imageoptimize": "^1.0.0",
    "nystudio107/craft-fastcgicachebust": "^1.0.0",
    "nystudio107/craft-minify": "^1.2.5",
    "nystudio107/craft-typogrify": "^1.1.4",
    "nystudio107/craft-retour": "^3.0.0",
    "nystudio107/craft-seomatic": "^3.2.0",
    "nystudio107/craft-webperf": "^1.0.0",
    "nystudio107/craft-twigpack": "^1.1.0"
  },
  "autoload": {
    "psr-4": {
      "modules\\sitemodule\\": "modules/sitemodule/src/"
    }
  },
  "config": {
    "sort-packages": true,
    "optimize-autoloader": true
  },
  "scripts": {
    "craft-update": [
      "@pre-craft-update",
      "@post-craft-update"
    ],
    "pre-craft-update": [
    ],
    "post-craft-update": [
      "@php craft install/check && php craft clear-caches/all --interactive=0 || exit 0",
      "@php craft install/check && php craft migrate/all --interactive=0 || exit 0",
      "@php craft install/check && php craft project-config/apply --interactive=0 || exit 0"
    ],
    "post-root-package-install": [
      "@php -r \"file_exists('.env') || copy('.env.example', '.env');\""
    ],
    "post-create-project-cmd": [
      "@php craft setup/welcome"
    ],
    "pre-update-cmd": "@pre-craft-update",
    "pre-install-cmd": "@pre-craft-update",
    "post-update-cmd": "@post-craft-update",
    "post-install-cmd": "@post-craft-update"
  }
}

So the craft-update script runs two other scripts, pre-craft-update & post-craft-update. These automatically do the following when our container starts up:

- Clear all of Craft's caches
- Run any pending database migrations
- Apply any pending Project Config changes

Starting from a clean slate like this is so helpful in terms of avoiding silly problems like things being cached, not up to date, etc.

Service: php_xdebug

The php_xdebug con­tain­er is very sim­i­lar to the pre­vi­ous php con­tain­er, but it includes xde­bug so that we can do seri­ous PHP debug­ging when we need it.

Requests get rout­ed by Nginx to this con­tain­er auto­mat­i­cal­ly if the XDEBUG_SESSION cook­ie is set.
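A minimal sketch of how that cookie-based routing could look in the Nginx config (the upstream host names, port, and resolver here are assumptions for illustration, not the project's actual config):

```nginx
# Sketch: choose a PHP backend based on the XDEBUG_SESSION cookie.
# "php" and "php_xdebug" are assumed service host names on the Docker network.
map $cookie_XDEBUG_SESSION $fastcgi_backend {
    default php;
    ~.+     php_xdebug;
}

server {
    # Docker's embedded DNS, needed because the backend is a runtime variable
    resolver 127.0.0.11;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass $fastcgi_backend:9000;
    }
}
```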

FROM nystudio107/php-prod-base:8.0-alpine

# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
        autoconf \
        dpkg-dev \
        dpkg \
        file \
        g++ \
        gcc \
        libc-dev \
        make \
        pkgconf \
        re2c \
        wget

# Install packages
RUN set -eux; \
    # Packages needed only for build
    apk add --no-cache --virtual .build-deps \
        $PHPIZE_DEPS \
    && \
    # pecl PHP extensions
    pecl install \
        xdebug-3.0.2 \
    && \
    # Enable PHP extensions
    docker-php-ext-enable \
        xdebug \
    && \
    # Remove the build deps
    apk del .build-deps \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /tmp/* /var/tmp/*

# Copy the `xdebug.ini` file into place for xdebug
COPY ./xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini

# Copy the `zzz-docker-php.ini` file into place for php
COPY zzz-docker-php.ini /usr/local/etc/php/conf.d/

# Copy the `zzz-docker-php-fpm.conf` file into place for php-fpm
COPY zzz-docker-php-fpm.conf /usr/local/etc/php-fpm.d/

We’ve based the image on the nys­tu­dio107/php-prod-base image described above, tagged at ver­sion 8.0

We’re then adding in the xde­bug 3 PHP exten­sion, and copy­ing into place the xdebug.ini file:

xdebug.mode=debug
xdebug.start_with_request=yes
xdebug.client_host=host.docker.internal

…and copying into place the zzz-docker-php-fpm.conf file:

[www]
pm.max_children = 10
pm.process_idle_timeout = 10s
pm.max_requests = 1000

This just sets some defaults for php-fpm that make sense for local development.

By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it avail­able as nys­tu­dio107/php-dev-base on DockerHub.

Since it’s pre-built, we don’t have to build it every time, and can lay­er on top of this image any­thing project-spe­cif­ic via the php-dev-craft con­tain­er image:

FROM nystudio107/php-dev-base:8.0-alpine

# dependencies required for running "phpize"
# these get automatically installed and removed by "docker-php-ext-*" (unless they're already installed)
ENV PHPIZE_DEPS \
        autoconf \
        dpkg-dev \
        dpkg \
        file \
        g++ \
        gcc \
        libc-dev \
        make \
        pkgconf \
        re2c \
        wget

# Install packages
RUN set -eux; \
    # Packages needed only for build
    apk add --no-cache --virtual .build-deps \
        $PHPIZE_DEPS \
    && \
    # Packages to install
    apk add --no-cache \
        su-exec \
        gifsicle \
        jpegoptim \
        libwebp-tools \
        nano \
        optipng \
        mysql-client \
    && \
    # Install PHP extensions
    docker-php-ext-install \
        pdo_mysql \
    && \
    # Install Composer
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin/ --filename=composer \
    && \
    # Remove the build deps
    apk del .build-deps \
    && \
    # Clean out directories that don't need to be part of the image
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /var/www/project

RUN mkdir -p /var/www/project/cms/storage \
    && \
    mkdir -p /var/www/project/cms/web/cpresources \
    && \
    chown -R www-data:www-data /var/www/project

WORKDIR /var/www/project/cms

# Start php-fpm
CMD php-fpm

This is all anal­o­gous to what we do for the reg­u­lar php con­tain­er described ear­li­er, except that we don’t go through the steps of installing the lat­est Com­pos­er pack­ages via composer install, because our reg­u­lar php con­tain­er takes care of this for us.

This con­tain­er exists pure­ly to field the rare requests in which we need xde­bug to debug or pro­file our code. When we do, we just need to con­fig­ure our debug­ger to use port 9003 and away we go!


Php­Storm xde­bug con­fig using port 9003

The majority of the time, this container sits idle doing nothing, but it's available when we need it for debugging purposes.

Service: Queue

The Queue service is an exact copy of the PHP service, and it makes liberal use of YAML aliases to share the same settings as the PHP service.

The only dif­fer­ence is the addi­tion of com­mand: ./craft queue/listen 10 which is what gets exe­cut­ed when the con­tain­er is spun up.
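In docker-compose.yml terms, that looks roughly like this (a sketch with assumed image and volume names, not the project's exact file):

```yaml
# Sketch: the queue service reuses the php service's settings via a
# YAML anchor/alias, overriding only the command that runs on startup
services:
  php:
    &php-service
    image: nystudio107/php-prod-craft:8.0-alpine
    volumes:
      - ./cms:/var/www/project/cms
  queue:
    <<: *php-service
    command: ./craft queue/listen 10
```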

The purpose of the Queue service is simply to run any background queue jobs efficiently, as discussed in the Robust queue job handling in Craft CMS article.

We just need to set the config/general.php set­ting run­QueueAu­to­mat­i­cal­ly to false so that queue jobs are not run via web request anymore.
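That setting is a one-liner in config/general.php (a fragment; the rest of the file is omitted here):

```php
<?php
// config/general.php (fragment): don't run queue jobs via web requests;
// the queue container's `./craft queue/listen` handles them instead
return [
    'runQueueAutomatically' => false,
];
```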

It waits until the MySQL container is up and running, and until the composer install has finished, before starting up the queue listener:

#!/bin/bash

# Run Queue shell script
#
# This shell script runs the Craft CMS queue via `php craft queue/listen`
# It waits until the database container responds, then runs any pending
# migrations / project config changes via the `craft-update` Composer script,
# then runs the queue listener that listens for and runs pending queue jobs
#
# @author    nystudio107
# @copyright Copyright (c) 2022 nystudio107
# @link      https://nystudio107.com/
# @license   MIT

cd /var/www/project/cms
# Wait until the MySQL db container responds
echo "### Waiting for MySQL database"
until eval "mysql -h mysql -u $DB_USER -p$DB_PASSWORD $DB_DATABASE -e 'select 1' > /dev/null 2>&1"
do
  sleep 1
done
# Wait until the `composer install` is done by looking for the `vendor/autoload.php` file
echo "### Waiting for vendor/autoload.php"
while [ ! -f vendor/autoload.php ]
do
  sleep 1
done
# Ensure permissions on directories Craft needs to write to
chown -R www-data:www-data /var/www/project/cms/storage
chown -R www-data:www-data /var/www/project/cms/web/cpresources
# Run any pending migrations/project config changes
su-exec www-data composer craft-update
# Run a queue listener
su-exec www-data php craft queue/listen 10

Service: webpack

web­pack is the build tool that we use for build­ing the CSS, JavaScript, and oth­er parts of our application.

The set­up used here is entire­ly based on the An Anno­tat­ed web­pack 4 Con­fig for Fron­tend Web Devel­op­ment arti­cle, just with some set­tings tweaked.

That means our web­pack build process runs entire­ly inside of a Dock­er con­tain­er, but we still get all of the Hot Mod­ule Replace­ment good­ness for local development.

This service is composed of a base image that contains node itself, all of the Alpine packages needed for headless Chrome, the npm packages we'll always need to use, and then a project-specific image that contains whatever additional things are needed for our project.

FROM node:16-alpine

# Install packages for headless chrome
RUN apk update \
    && \
    apk add --no-cache nmap \
    && \
    echo @edge http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories \
    && \
    echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories \
    && \
    apk update \
    && \
    apk add --no-cache \
        # Packages needed for npm install of mozjpeg & cwebp, can't --virtual and apk del later
        # Pre-builts do not work on alpine for either:
        # ref: https://github.com/imagemin/imagemin/issues/168
        # ref: https://github.com/imagemin/cwebp-bin/issues/27
        autoconf \
        automake \
        build-base \
        g++ \
        gcc \
        glu \
        libc6-compat \
        libtool \
        libpng-dev \
        libxxf86vm \
        make \
        nasm \
        # Misc packages
        nano \
        # Image optimization packages
        gifsicle \
        jpegoptim \
        libpng-dev \
        libwebp-tools \
        libjpeg-turbo-dev \
        libjpeg-turbo-utils \
        optipng \
        pngquant \
        # Headless Chrome packages
        chromium \
        harfbuzz \
        "freetype>2.8" \
        ttf-freefont \
        nss

ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
ENV CHROME_BIN /usr/bin/chromium-browser
ENV LIGHTHOUSE_CHROMIUM_PATH /usr/bin/chromium-browser

We’ve based the con­tain­er on the node image, tagged at ver­sion 16

We’re then adding the pack­ages that we need in order to get head­less Chrome work­ing (need­ed for Crit­i­cal CSS gen­er­a­tion), as well as oth­er libraries for the Sharp image library to work effectively.

By itself, this image won’t do much for us, and in fact we don’t even spin up this image. But we’ve built this image, and made it avail­able as nys­tu­dio107/n­ode-dev-base on DockerHub.

Since it’s pre-built, we don’t have to build it every time, and can lay­er on top of this image any­thing project-spe­cif­ic via the node-dev-craft con­tain­er image:

FROM nystudio107/node-dev-base:16-alpine

WORKDIR /var/www/project/

COPY ./npm_install.sh .
RUN chmod a+x npm_install.sh

# Run our webpack build in debug mode

# We'd normally use `npm ci` here, but by using `install`:
# - If `package-lock.json` is present, it will install what is in the lock file
# - If `package-lock.json` is missing, it will update to the latest dependencies
#   and create the `package-lock.json` file
# This automatic running adds to the startup overhead of `docker-compose up`
# but saves far more time in not having to deal with out of sync versions
# when working with teams or multiple environments
CMD export CPPFLAGS="-DPNG_ARM_NEON_OPT=0" \
    && \
    ./npm_install.sh \
    && \
    cd /var/www/project/buildchain/ \
    && \
    npm run dev

Then, just like the php container does with composer install, this does an npm install every time the Docker container is created. While this adds some time, it saves far more in keeping everyone on the team or in multiple environments in sync.

We have to do the npm install as part of the Dock­er image CMD because the file sys­tem mounts aren’t in place until the CMD is run.

This allows us to update our npm depen­den­cies just by delet­ing the package-lock.json file, and doing docker-compose up

The alter­na­tive is doing a docker exec -it craft_webpack_1 /bin/bash to open up a shell in our con­tain­er, and run­ning the com­mand manually.

This con­tain­er runs the shell script npm_install.sh when it starts up, to do an npm install if package-lock.json or node_modules/ is not present:

#!/bin/bash

# NPM Install shell script
#
# This shell script runs `npm install` if either the `package-lock.json` file or
# the `node_modules/` directory is not present
#
# @author    nystudio107
# @copyright Copyright (c) 2022 nystudio107
# @link      https://nystudio107.com/
# @license   MIT

cd /var/www/project/buildchain
if [ ! -f "package-lock.json" ] || [ ! -d "node_modules" ]; then
    npm install
fi
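
The guard logic that both composer_install.sh and npm_install.sh share can be sketched as a standalone function (a hypothetical helper for illustration, not part of the repo):

```shell
#!/bin/bash

# Sketch: the guard pattern shared by composer_install.sh & npm_install.sh.
# Prints "install" when the lock file or deps directory is missing, else "skip".
needs_install() {
  local dir="$1" lock="$2" deps="$3"
  if [ ! -f "$dir/$lock" ] || [ ! -d "$dir/$deps" ]; then
    echo "install"
  else
    echo "skip"
  fi
}

tmp="$(mktemp -d)"
needs_install "$tmp" package-lock.json node_modules   # prints "install"
touch "$tmp/package-lock.json"
mkdir "$tmp/node_modules"
needs_install "$tmp" package-lock.json node_modules   # prints "skip"
```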

All Aboard!

Hope­ful­ly this anno­tat­ed Dock­er con­fig has been use­ful to you. If you use Craft CMS, you can dive in and start using it your­self; if you use some­thing else entire­ly, the con­cepts here should still be very salient for your project.


I think that Dock­er — or some oth­er con­cep­tu­al­ly sim­i­lar con­tainer­iza­tion strat­e­gy — is going to be an impor­tant tech­nol­o­gy going for­ward. So it’s time to jump on board.

To see how you can make your Dock­er builds even sweet­er with make, check out the Using Make & Make­files to Auto­mate your Fron­tend Work­flow article.

As men­tioned ear­li­er, the Dock­er con­fig used here is used in both the dev​Mode​.fm GitHub repo, and in the nystudio107/​craft boil­er­plate Com­pos­er project if you want to see some in the wild” examples.

Hap­py containerizing!