How to Determine OKRs for Software Quality Assurance

Putra Agung Pratama
8 min read · Jan 13, 2024
Image by DALL·E 3

When you come across the terms ‘Software Quality Assurance’ or ‘Application Tester’, what immediately comes to mind is probably someone responsible for ensuring there are no bugs in an application. While this is true, have you ever wondered about the ins and outs of their goals? What specific targets do they need to achieve? How do they measure them?

Let’s explore and delve into the broader goals that SQAs/Testers must achieve in the field of software development!

Why do SQAs need Achievements?

Like other departments in an organization, setting and achieving goals or milestones is beneficial for several reasons:

  • Continuous improvement,
  • Performance Measurement,
  • Alignment with Organizational Goals,
  • Stakeholder Satisfaction,
  • Resource Optimization,
  • Risk mitigation,
  • Professional Development.

Achieving SQA goals is critical to fostering a culture of improvement, aligning with organizational goals, motivating teams, ensuring customer satisfaction, and ultimately contributing to the overall success of the product or service provided.

How are Achievements created?

Achievements in SQA are typically articulated as Key Results to enhance clarity and facilitate easier explanation of what needs to be accomplished, as well as to streamline the measurement process. The following outlines the sequence for formulating Key Results in an organization:

For example, suppose the goal is to create “Nasi Goreng/Fried Rice with a Sunny-Side-Up Egg”; every level of the organization plays an important role. The following table explains the step-by-step process and looks at the activities of the actors involved in achieving culinary excellence.

Objectives, Key Results, and How They Are Measured:

Now, let’s explore some real-life examples of key results in action.

Example Objective 1:

“Reduce Production Bugs by 10% from the Previous Quarter”.

Key Results:

  1. Implement Automated Testing for Critical Flows
  2. Increase Automation Coverage
  3. Reported Production Bugs No More Than 20% of Defects.
  4. Test Plan vs Test Cases Accuracy Improvement

Key Result Description:

  • (KR 1) This key result aims to improve software quality by automating tests for critical user flows. Automated testing ensures consistent and thorough validation, reducing the risk of critical bugs in key areas.
  • (KR 2) This key result aims to elevate testing efficiency by expanding automation coverage. The objective is to automate a higher percentage of test scenarios, streamlining validation processes and minimizing manual efforts for improved software development.
  • (KR 3) By enforcing this threshold, the team seeks to detect and address issues early in the development process, minimizing the chance of critical bugs reaching the production stage.
  • (KR 4) This key result aims to enhance the accuracy of test scenarios outlined in the test plan by ensuring alignment with the corresponding test cases created. The goal is to improve the precision of test planning, fostering a more effective and reliable testing process.

Key Result things-to-do:

(Key Result 1) Implement Automated Testing for Critical Flows

  • Measurement: Test Coverage: Measure the percentage of critical user flows covered by automated tests.
  • Success Criteria: 90% Test Coverage for Critical Flows: Success is achieved if automated testing covers at least 90% of critical user flows, providing a robust safety net for key functionalities.
  • Notes: A Critical Flow typically contains the highest-priority scenarios, edge scenarios, flaky scenarios, or other scenarios that frequently cause bugs.

(Key Result 2) Increase Automation Coverage

  • Measurement: Automation Coverage: Measure the percentage of test scenarios covered by automated tests.
  • Success Criteria: 95% Automation Coverage: Success is achieved if automated tests cover at least 95% of test scenarios, ensuring a thorough automated validation.
  • Notes: Unlike Critical Flows, which cover only P0 scenarios, this regression suite also needs to cover scenarios down to P3.

(Key Result 3) Reported Production Bugs No More Than 20% of Defects.

  • Measurement: Count the number of bugs identified in the production environment against the total number of defects discovered during the development phase.
  • Success Criteria: 20% Threshold: Success is achieved if the percentage of production bugs compared to total defects found during development does not exceed 20%.
  • Notes: For this example, a “Defect” is a bug found during the development phase, and both production bugs and defects are counted only at High to Critical severity.

(Key Result 4) Test Plan vs Test Cases Accuracy Improvement

  • Measurement: One of the outputs of a Test Plan is “Test Scenarios”, so the measurement is the Test Scenarios vs. Test Cases Alignment Rate: the percentage of alignment between the test scenarios outlined in the test plan and the actual test cases created.
  • Success Criteria: Success is achieved if 95% or more of the planned test cases are executed during the testing phase.
  • Notes: Conduct regular reviews between planned test scenarios and created test cases to identify and address any discrepancies promptly.

Example Objective 2:

“Efficient Hotfix Management”.

Key Result:

  1. Hotfix Turnaround Time.
  2. Success Rate of Hotfixes.
  3. Root Cause Analysis (RCA) Completion Time.
  4. Hotfix Deployment Frequency

Key Result Description:

  • (KR 1) Measure the time taken from the identification of a critical issue to the successful deployment of the corresponding hotfix.
  • (KR 2) Assess the percentage of hotfixes that are successfully implemented without introducing new issues.
  • (KR 3) Measure the time taken to conduct and complete Root Cause Analysis document for issues addressed by hotfixes.
  • (KR 4) Gauge the team’s responsiveness to critical issues by monitoring the frequency of timely hotfix implementations.

Key Result things-to-do:

(Key Result 1) Hotfix Turnaround Time

  • Measurement: Measure the time taken from the identification of a critical issue requiring a hotfix to the successful deployment of the corresponding hotfix.
  • Success Criteria: Success is achieved if the Hotfix Turnaround Time is consistently within a specified timeframe, for example, 24 hours.
  • Notes: Regularly review and optimize the hotfix deployment process for continuous improvement.

(Key Result 2) Success Rate of Hotfixes

  • Measurement: Calculate the percentage of hotfix deployments that were successful in resolving critical issues without introducing new problems.
  • Success Criteria: Success is achieved if the Success Rate of Hotfixes consistently meets or exceeds a predetermined threshold, for example, 95%.
  • Notes: A hotfix is considered successful if its implementation neither reintroduces the same issue nor causes new issues.

(Key Result 3) Root Cause Analysis (RCA) Completion Time

  • Measurement: Measure the time taken to complete Root Cause Analysis for identified issues or incidents.
  • Success Criteria: Success is achieved if the RCA Completion Time consistently meets or is below a specified timeframe, for example, 48 hours.
  • Notes: Prioritize prompt and thorough root cause analyses to enhance the efficiency of issue resolution.

(Key Result 4) Hotfix Deployment Frequency

  • Measurement: Calculate the frequency of deploying hotfixes within a given time period (e.g., per week or per month).
  • Success Criteria: Success is achieved if the hotfix deployment frequency aligns with or exceeds the targeted deployment rate.
  • Notes: Monitor the frequency of hotfix deployments to ensure timely responses to critical issues.

Measuring these two objectives appears straightforward as they rely on simple and direct formulas for key results. However, the challenge lies in obtaining the necessary data to support these calculations. The final example adds an additional layer of complexity as it involves a certain level of subjectivity in the assessment process.

Example Objective 3:

“Skill Set Enhancements”.

Key Result:

  1. Implement a Training Program.
  2. Individual Skill Assessment.
  3. Application in the projects.

Key Result Description:

  • (KR 1) Encourage the team to engage in continuous learning and active team participation.
  • (KR 2) Evaluate individual skills through targeted assessments to identify areas for improvement and track overall skill improvement.
  • (KR 3) These key outcomes focus on applying newly acquired skills in real-world projects, allowing team members to practically demonstrate their proficiency.

Key Result things-to-do:

(Key Result 1) Implement a Training Program.

  • Measurement: Develop and launch a comprehensive training program, and track the participation and engagement of team members in the training sessions.
  • Success Criteria: Success is achieved if 90% of team members participate in at least one training session within the next quarter.
  • Notes: There are two ways to evaluate the engagement metric: using feedback surveys, you can set a threshold for a satisfactory level of engagement (for example, an average survey score above 80%); using quizzes, the average score or percentage of correct answers can serve as the engagement metric.

(Key Result 2) Individual Skill Assessment.

  • Measurement: Since this area is quite challenging, some supporting data may be required. Start by creating a Skill Matrix tied to each SQA level, and use quizzes to test understanding.
*Ensure both weights sum to 1 (100%) to maintain a valid weighting system; adjust the weights based on the importance you assign to each component.
  • Success Criteria: Success is achieved if the SQA attains the skills defined in the skill matrix and also scores at least 80% on the quizzes to confirm understanding of the material from KR 1.
  • Notes: This KR spans many measurement areas; comprehensive actions and data are needed to support the formula.

(Key Result 3) Application in the projects.

  • Measurement: Assess the number of projects where team members actively apply the learned skills, and evaluate the complexity and impact of the applied skills within each project.
  • Success Criteria: The total rate for the “Effectiveness Criteria” must be at least 80%.
  • Notes: Measuring this KR requires an additional measurement based on the Effectiveness Criteria, so each Effectiveness Criteria item should be measured first.

Closing Thoughts:

These are some examples of Software Quality Assurance (SQA) measurements, where metrics guide your efforts to achieve excellence. Remember, it’s not just about the numbers; it’s about continuous improvement, adaptability and delivering high-quality software.

Ensure OKRs are SMART — specific, measurable, achievable, relevant, and time-bound. Regularly review and adjust based on team progress and changing priorities. Involve team members in goal-setting for ownership and commitment to skill development. Align metrics with your objectives, empower your team, and set new benchmarks for excellence. Happy measuring!
