A Craft CMS Development Workflow With Docker: Part 7 - Multi Developer Workflow

As part of my Craft CMS Development Workflow With Docker series of articles I'll be covering, from start to finish, setting up a development workflow for working with Craft CMS in Docker.

Git repo here.



To round out this series I'm going to move away from the technical details of building and launching a Craft CMS project in Docker. Instead I'd like to discuss some of the things that I've learnt about working on teams with multiple developers, how we can structure our processes and tools to accommodate them, and how that fits in with the previous articles.

When working on a project with multiple developers things can get messy quickly. It's important to have both a well-defined set of processes and a strong leader to enforce them in order to keep everything manageable. The personal qualities of a good leader are a bit beyond the scope of this article, but the processes below have served me well on teams of up to seven developers.

We'll start with what I think is the backbone of an efficient team project.

Branching and Merging

When developers are working independently on a shared codebase it is very easy for them to make changes which collide with the work of other developers. In order to reconcile these collisions we can use git's branching and merging functionality. It's important to have a strategy around this activity though: a set of steps which accommodate rapid development whilst also protecting against common problems.

My preferred methodology is similar to GitLab Flow and at a high level its steps are as follows (a sketch of the day-to-day git commands follows the list):

  1. All developers work inside a single project repo - no unnecessary forking.
  2. Nobody can commit to master. Yes, that includes you.
  3. Any bug or new piece of functionality is logged as an issue.
  4. When a developer starts work on an issue they create a new branch from the latest commit on master with a name that references the issue.
  5. Once the developer has completed some or all of the work towards resolving an issue they create a merge request from their issue branch into master.
  6. The merge request is reviewed and ultimately accepted, merging the developer's changes into master. This also resolves the original linked issue (and optionally deletes the issue branch).
  7. Deployments are driven by tags: any commit tagged with a version becomes a candidate for staging or production deployment. These deployments can be triggered automatically or manually depending on the project's requirements.
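As a rough sketch, steps 4-7 might look like this on the command line (the issue number, branch name and version tag are just examples):

# start work on issue #42 from the latest commit on master
git checkout master
git pull origin master
git checkout -b 42-add-contact-form

# publish the branch, then open a merge request into master via GitLab
git push -u origin 42-add-contact-form

# once the merge request is accepted, tag master to make it a deployment candidate
git checkout master
git pull origin master
git tag v1.2.0
git push origin v1.2.0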

This workflow has several benefits which relate to the functionality we've implemented previously using Docker and GitLab's CI pipelines.

  • All codebase changes have a paper trail (an issue) which should define why the change has taken place.
  • Developers can work independently on their branches in isolation from what everyone else is doing.
  • Developers can regularly merge master into their issue branches to pick up the latest changes.
  • Continuous integration builds and tests can run for every issue branch.
  • Only issue branches which have passing tests can be merged into master.
  • If fast-forward-only merge requests are enforced this guarantees that master will never have failing tests - this is important because master is the base for all issue branches.
  • Forcing all code changes through a merge request dramatically increases the probability of peer review occurring either during development (by sharing work-in-progress) or at the point of merging.

I have used this process on small websites where I am the only developer, all the way up to the team I helped to build around the Now Music streaming platform, which had seven developers working simultaneously, often on a single codebase.

Once the above process has been established it also provides a framework for organising many of the following ideas.

The Development Environment

One problem which often occurs when working on a project with multiple developers is a collision between the platforms and tooling each developer prefers to use. This is an especially difficult problem when a project is moved from one development team to another over time. It isn't unusual for a developer to rely on several globally installed build tools to compile a project's assets.

This very situation has nearly reduced me to angry tears in the past as I've attempted to figure out a previous developer's tooling (and the versions of all of those tools) based on the input and output files that are stored in the repo.

We can avoid all of that by applying a blanket rule to our projects: anything required to go from a freshly cloned repo to deploying an updated version of the project is codified within the repo itself.

This is made much easier by using Docker for local development. By packaging our build tools into a container defined within the repo we make our build process reproducible in any environment onto which the project is cloned. There are, however, a few lessons I've learnt over the last couple of years that are worth highlighting:

  • Version lock everything: Docker images, NPM packages, OS level packages. Relying on the latest version of anything guarantees your build tools will stop working at some point - usually while you're trying to onboard a new developer (see the Dockerfile sketch after this list).
  • OS level package managers aren't perpetually stable. NPM, Docker Hub, Packagist et al. keep a copy of every version of every package they have ever seen (unless removed for security reasons), so you can be confident your locked versions won't randomly break. OS package managers do not. Recently, every one of my project build chains more than a year old broke because an Ubuntu package mirror that my Docker images had been using was deleted. There's not much you can do about this.
  • Peer review changes to the build tools. When an unexpected error occurs in the build tooling, developers sometimes fix it for themselves and then include those changes in the commit for the issue they were originally tasked with. If this happens, do not blindly accept the build tool changes! They could impact the velocity of the entire development team and, unlike the rest of the codebase, they have no automated tests applied to them.
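As a minimal sketch of what version locking might look like in a build tooling Dockerfile (the image tag and package versions here are illustrative, not recommendations):

# pin the base image to an exact tag - never 'latest'
FROM node:10.16.3-alpine

# pin OS level packages to specific versions
RUN apk add --no-cache git=2.22.0-r0

# install NPM dependencies exactly as recorded in the lock file
COPY package.json package-lock.json ./
RUN npm ci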

Unlike build tool config, I have a personal dislike for including editor configs in a project's repo. A developer's editor or IDE of choice is usually unrelated to any individual project, and it's unlikely that a development team will want to force a specific editor onto its developers. It therefore doesn't make sense to include editor configs within the project repo - they just create clutter. I usually add these files to .gitignore so that individual developers can maintain their own copies locally if they wish, without inflicting them on everyone else.
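For example, a few typical editor artefacts to add to .gitignore (adjust to match the editors your team actually uses):

# keep editor / IDE configs out of the repo
.vscode/
.idea/
*.sublime-project
*.sublime-workspace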

Database Schema

Let's talk about something a bit more Craft specific.

A common problem faced by multi-developer teams is keeping persistent storage schemas in sync. This is often an issue at the beginning of a project when the data storage format is still in flux and developers aren't able to easily communicate schema changes to everyone else on a regular basis.

The primary solution to this problem is to ensure that the project repo contains a codified representation of the data schema as it changes over time. Different frameworks have different methods of handling this, including:

  1. Don't allow any database schema changes
  2. Keep an up to date database dump in the repo
  3. Maintain a set of database migration files
  4. Maintain an up to date representation of the database schema which can be diffed against its current state and any required changes applied automatically

Craft CMS used to rely on the second and third of these options, with migration files only really being viable when combined with a plugin to generate them because they were upsettingly verbose. However, since I began writing this series Craft has provided us with an alternative which fits into the fourth category: Project Config. There are already a few articles describing the functionality and benefits of Project Config so I won't re-hash them here, but I will run through the benefits it provides for multi-developer teams and a few caveats to watch out for.

For the uninitiated, to get started with Project Config simply add the following to Craft's config/general.php:

'useProjectConfigFile' => true,

Once that's activated, any schema changes made by the team's developers will be tracked within the project repo. Whenever a developer pulls changes made by someone else, those changes can be applied to their local environment by carrying out a Project Config sync. In theory this is all lovely and solves our schema sync problems; in practice there are still some bad situations that you'll undoubtedly find yourself in:
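For reference, the sync can also be triggered from the console (assuming Craft 3.1+, where Project Config was introduced):

./craft project-config/sync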

Manual merges of project.yaml

Craft does a pretty good job of maintaining a schema representation in yaml format which is checked into the repo, however it's almost guaranteed that at some point somebody will merge a schema change that git can't reconcile with its automated merge procedures. At that point you'll be left with either a few manual changes required in project.yaml to complete the merge or a completely fucked project.yaml.

If you find yourself in the latter situation and you are following the strict branching strategy described above, you should only encounter this when merging the latest master into your not-yet-merged issue branch. In that scenario you can usually sort things out by restoring the file to master's state:

git checkout --theirs config/project.yaml

then performing a project config sync and re-applying your issue branch's schema changes manually via the Craft control panel. This is far from ideal but I've yet to find a better alternative.

Slug/handle collisions

This is a tricky one because it can sneak up on you. If developer A creates a field with a slug/handle set to "my-field" and developer B does the same but for a different purpose, both changes will be applied to project.yaml without incident. The two fields will be given different UUIDs and git will include them both without a care. However, the next time you execute a project config sync you'll be rewarded with an error complaining about handle collisions.

This is an unfortunate error that isn't straightforward to recover from, as it's likely the field handles have also been added to your template files. Prevention is the best cure for this problem - ensure all developers are aware that using non-specific handle names might come back to bite them later.

Another useful preventative tool is a merge request template which forces the developer to list all of the new fields, groups, sections and sites they have created, along with the relevant handles. This makes it easy for peer reviewers to scan the list and highlight any potentially dangerous names.
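As a sketch, a GitLab merge request template (a markdown file stored in .gitlab/merge_request_templates/ and selected when opening the merge request) might prompt for this information like so:

## Schema changes

List every new field, group, section and site along with its handle:

- Fields:
- Field groups:
- Sections:
- Sites: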

Testing

I originally intended to write a little about how the testing setup I've described earlier in this series could be adjusted to run locally in response to file changes, but then Pixel & Tonic released a new testing framework.

I'm going to write more soon about that framework and how it links into the work we've done so far. Until then, here are a few tips that I always try to abide by on multi-dev projects:

  • Make sure the tests running locally are the same as those running in CI and they are using a reproducible environment.
  • Make tests as quick and easy to execute as possible from a freshly checked out repo - if you make developers take extra steps to get tests working, they won't run the tests (a minimal sketch of a single entry point follows this list).
  • Ensure newly onboarded developers can add tests easily (in fact this is a great way to get new devs familiar with a project's codebase).
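On that second point, a single well-known entry point goes a long way. A minimal sketch, assuming a hypothetical docker-compose.test.yml which defines a "test" service built from the same image CI uses:

# run the whole suite inside the same container image that CI uses
docker-compose -f docker-compose.test.yml run --rm test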

Dynamic Branch Deployments

As demonstrated in the earlier articles in this series, it's relatively easy to use our CI pipeline to build new images for our project. There's no reason we can't extend this functionality to our issue branches too. By updating our .gitlab-ci.yml file to tag images with the name of the branch being built we can not only create one image per branch, we can also hook that up to a dynamic routing system to create on-demand preview links for each branch.
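A minimal sketch of the tagging side, using GitLab's built-in CI_COMMIT_REF_SLUG and CI_REGISTRY_IMAGE variables (the job definition is illustrative and assumes registry login is handled elsewhere in the pipeline):

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG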

This is the system that I have set up for a couple of client websites. It allows me to create a new branch to build out a feature which automatically gets deployed on its own preview URL if its tests pass successfully.

What Next?

I've come to the end of the articles that I originally had planned for this series. Along the way several things have changed in Craft itself, which have altered both the way I work with it and the things I've ended up writing about. I'll be revisiting the previous articles over the next week to update them with new Docker images and up-to-date config files, so check back on those soon. I also plan on putting all of this information together into a git repo which you can use as a starting point for any new Craft projects.

I've got a few more Craft specific articles coming soon, including how to run background tasks sensibly using Docker, running Craft in Kubernetes, and a nice little one on cache busting.

If these articles have helped you out then feel free to leave a comment or find me on Twitter - I'm always happy to hear about what you're achieving with Craft CMS in Docker.

