Evaluating Quality Culture in your team

Ladislav Cicon
Jamf Engineering
Apr 21, 2022 · 6 min read



I often use the term Quality Culture to describe the aspects of a team’s culture that impact the quality of its products. For a software team, it means the behaviours and norms that influence quality directly or indirectly: coding standards, analytics, testing, learning opportunities and many more. To map and understand these influences better, I use the Quality Culture Model, which looks at approximately 100 different data points.

This model is one way for a team to understand the strengths and weaknesses that influence the quality of its software product. It is of course not complete, but it is a good starting point for teams.

Culture of Quality

Before looking into the model, let’s remind ourselves why culture, and quality culture in particular, is an important aspect for organisations. Harvard Business Review published an article that looked into the “Culture of Quality”. The term is defined as an “environment in which employees not only follow quality guidelines but also consistently see others taking quality-focused actions, hear others talking about quality, and feel quality all around them”, and it resembles the quality culture we talked about at the beginning.

What is interesting about this research is the impact of a culture of quality. HBR found that an average-sized multinational company with a highly developed culture of quality spends $350 million less annually fixing mistakes than a company with a poorly developed one.

Surely, we can say that companies producing software are a bit different from the typical multinational companies in the research… Or are they? Isn’t a lot of this transferable to software companies?

Quality Culture Model

Simply put, the Quality Culture Model is a set of questions that evaluate different aspects of the team’s culture from a quality perspective. The end goal is to give the team an easy-to-digest visualisation of the improvement areas it should investigate. The model consists of three parts: questions, a scoring mechanism and a visualisation of the results.

In the end it is quite simple, but there is a lot behind it. The model was created based on the work of the Modern Testing community and includes contributions from many practitioners. It is kept open source and is open for further contributions. The full model is here.

Questions — the core

Questions in the model, full model is here

We use questions that ask specifically about techniques, approaches or processes used by the team, e.g. “The testing specialist is not the target of blame if a bug in production is discovered”. The questions are grouped into 8 areas, which are further split into categories. Every category comes with a description of an idealised team, setting the stage for the team members evaluating the questions. Thanks to that, people share a similar understanding of what we are aiming for.

Each question also carries a goal, so the result can later be compared against the target the team sets for itself. You can work on these goals with your colleagues or with subject matter experts from your company.
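To make the structure concrete, here is a minimal sketch of how the model’s building blocks (areas/categories, an idealised-team description, questions and per-question goals) could be represented in code. The category name, description and goal value below are illustrative assumptions, not content taken from the actual model.

```python
# Minimal sketch of the model's structure; names and values are illustrative.
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    goal: int  # target score (1-5) the team sets for itself


@dataclass
class Category:
    name: str
    ideal_team: str  # description of an idealised team, read before scoring
    questions: list = field(default_factory=list)


model = [
    Category(
        name="Testing",
        ideal_team="Bugs found in production are treated as a team problem.",
        questions=[
            Question(
                text="The testing specialist is not the target of blame "
                     "if a bug in production is discovered.",
                goal=5,
            ),
        ],
    ),
]

print(model[0].questions[0].goal)  # -> 5
```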

Scoring

In the evaluation phase, every participant (team member) scores each question on a 1 to 5 scale, where 1 is the lowest score and 5 the highest. In the case of techniques, 1 stands for ‘we don’t apply this’ whereas 5 means something like ‘we apply this technique every time it fits’. The voting is subjective and depends on the view and experience of each team member. Practice shows that the average of the votes weighs different opinions quite accurately.

What matters most in scoring is setting the stage with the description of the idealised team in each category, so people are on the same, or at least a similar, page.
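The scoring step above can be sketched in a few lines: each participant rates every question from 1 to 5, and the per-question average becomes the team score. The question labels and votes below are made-up examples, not results from a real evaluation.

```python
# Sketch of the scoring mechanism: average each question's 1-5 votes.
from statistics import mean

votes = {
    "No blame for production bugs": [5, 4, 4],  # one vote per participant
    "We apply contract testing":    [1, 2, 1],
}

# The team score for a question is the average of all participants' votes.
team_score = {question: round(mean(scores), 1)
              for question, scores in votes.items()}

print(team_score)
# {'No blame for production bugs': 4.3, 'We apply contract testing': 1.3}
```

Averaging keeps the process simple while still smoothing out individual subjectivity, which is exactly why the model relies on multiple voters rather than a single assessor.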

Visualisation

Results are automatically processed and displayed in radar charts against the goal we have set for our team. We use keywords to help with navigating from the charts back to the questions.

It is important to understand that the model is not aimed at comparing teams or determining your level. Its purpose is to show the team the areas it can improve.
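The comparison a radar chart visualises, the team’s average score per category plotted against the goal, can be sketched without any charting library. The categories, goals and scores below are illustrative assumptions; a real setup would feed the spreadsheet data into a charting tool instead of printing.

```python
# Sketch of the score-vs-goal comparison behind the radar chart.
from statistics import mean

goals  = {"Testing": 4.0, "Learning": 4.0, "Analytics": 3.0}
scores = {"Testing": [4, 5, 4], "Learning": [2, 3, 2], "Analytics": [3, 3, 3]}

for category, goal in goals.items():
    avg = mean(scores[category])
    gap = round(goal - avg, 1)  # positive gap -> below the goal
    flag = "focus here" if gap > 0 else "on target"
    print(f"{category:10} avg={avg:.1f} goal={goal:.1f} gap={gap:+.1f} ({flag})")
```

A positive gap marks a category the team may want to investigate first, which mirrors how the chart is read: the team polygon falling inside the goal polygon.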

Using the model in engineering teams

I’d like to share a few things we settled on at Jamf which can help you try this model in your team or organisation.

Firstly, to evaluate a team using this model you will need volunteers: your teammates. It can be tricky to sell an evaluation consisting of 100 questions, so I recommend starting with early adopters. People who are genuinely motivated to improve things are ideal candidates.

You can always expand this group later to the whole team, or most of it, once you have early results and it is easy to explain the benefits of the evaluation. Your results will get a bit more precise as you add more teammates.

This early stage is also a good time to set the goal for your team. You can work with your colleagues or subject matter experts from your company to set the goal upfront. Doing so also helps you understand the questions, because you need to understand them to set the goal. I recommend setting the goal in advance; that way you won’t be tempted to lower the bar after you see the results. It is possible to do this step after the evaluation, but then you need to make sure you are not biased.

Collecting results

The most popular format for collecting results for us is a meeting during which every participant (team member) scores each question in a spreadsheet. This meeting is a long one and can take 2–2.5 hours. I understand your fear when hearing about such a long meeting; there are certainly ways around it if this is a no-go for you or your team.

A longer meeting with a short break in the middle is ideal for keeping continuity. But you can run two shorter sessions, or even several of them planned as part of your regular retrospectives, going through one category per retro. Another way is to use a survey instead of a discussion. Whichever you choose, maintain continuity and some sort of momentum so you don’t lose your participants in an endless project or questionnaire.

A meeting has one big advantage, though: the possibility of facilitating discussion and getting people on the same page during the process. That simply means reading the description of the idealised team in each category and having a short chat about it. This is just enough to create a shared understanding.

No matter which format you pick, you need to invite a group of people to participate. It can be a small audience of early adopters, or it can be the team lead and a tester from the team. It can even be the whole team.

We got pretty accurate results trying it with a group of 2 or 3 people from the team; just make sure that different roles are represented. A good next step is usually asking the rest of the team to take part in iteration #2. It’s easier to convince more people with tangible results in your hands.

Improve your reality (quality)

After spending hours of your time and your team’s time, you ought to do something with the results.

For our teams, the findings were a mix of known team and organisation-wide issues that the evaluation simply highlighted, e.g. a missing contract-testing layer in our test automation mix. Sometimes a finding was a surprise, such as the one about creating time for learning activities in one team.

Thanks to the goal set in the model, it should be fairly easy for you to pick areas to focus on. Don’t hesitate to create action items out of it: mix bigger initiatives with low-hanging fruit, improve your quality culture, and improve the reality of your team.

What will you get from it? Well, improving your quality culture will lead to a team that produces higher-quality software.
