SDET: Power of Test Automation Strategy

Kostiantyn Teltov
14 min read · Mar 5, 2024

Hello QA community,

Many technical professionals find working with static documents, such as test automation strategies, test plans, or guides, less appealing. This feeling stems from a natural preference for dynamic code, which is often more interesting and challenging. Frankly, I was one of those people. But something changed in me when I realized the power of this document. In this article we will look at the importance of a test automation strategy and some of its details. Fasten your seatbelt and let's fly into the magical world of test automation strategy!

I wonder: why is it worth it?

Really. Why? Let's try to think and analyze it together.

Planning and Analysis

Creating a test automation strategy involves thorough planning and analysis. You need to understand the application under test, identify the most critical workflows for automation, and determine the scope and objectives of automation. This requires a deep understanding of the software, its users, and the environments in which it operates.

Conclusion: In other words, you will see a high-level picture of the test automation process and have all the books on the shelf (in your head).

Toolset

Selecting the right tools for test automation is a crucial step that demands significant effort. The chosen tools should not only be compatible with the technology stack of the application but also align with the team’s skills and the project’s budget. Evaluating and choosing the most appropriate tools involves researching, comparing features, and often conducting proof-of-concept tests to ensure compatibility and effectiveness.

Conclusion: You will know exactly what tools you are going to use. You will understand the technology's limitations. You will know what skills this toolset requires. You are armed and ready!

Environment/Infrastructure

Setting up the infrastructure for test automation, including test environments, data management systems, and integration with CI/CD pipelines, requires careful planning and execution. This infrastructure must be reliable, scalable, and maintainable to support the automation efforts over time.

Conclusion: You know your environment requirements. You know how the delivery process happens, what quality gates you have, and what limitations you may need to resolve.

Risk analysis

Incorporating risk analysis into the development of a test automation strategy is essential because it helps in identifying, assessing, and prioritizing potential risks associated with the automated testing process. This step is crucial for minimizing the impact of these risks on the project’s success and ensuring the efficiency and effectiveness of the test automation efforts.

Conclusion: You have a concrete plan for what to do if a risk occurs. It is some kind of shield!

Reporting and knowledge sharing

Having a test automation strategy makes it easier to communicate with managers, developers, and other team members. It is also easier to present your project to other teams and to newcomers.

Conclusion: You have a document with a clear test automation workflow. You can share it with your team or other members of your organization to show your project in good shape (I hope :)).

We've considered some of the key reasons why a test automation strategy is worth the time it takes to develop. Now let's consider some of the content you might want to include in it.

Test Automation Strategy

Before we start, I want to highlight that we live in an Agile world. This means you may want to include any options you believe will be useful for you and your project or product. In this section we will consider probably the most common parts of a test automation strategy. I'm not discovering America here: a lot of this has been written before and appears in many templates you can find on the Internet. Ready? Wingardium leviosa!

Strategy Summary

It should not be long, but everything in it should have a reason for being there.

  • Briefly describe the purpose and objectives of the test automation strategy.
  • Highlight the expected benefits and key components of the strategy.

Example:

Objectives:

  • Improve Test Coverage: Automate tests across different levels (unit, integration, system, and acceptance) to ensure thorough coverage and identify defects early in the development cycle.
  • Increase Efficiency: Reduce the time required for regression testing, enabling faster feedback loops and accelerating the pace of development.
  • Enhance Quality: Identify and address defects early, improving the overall quality of the software and reducing the cost of fixing bugs.
  • Support Continuous Integration/Continuous Deployment (CI/CD): Integrate automated tests into the CI/CD pipeline to ensure that changes are automatically tested and ready for deployment.

Expected Benefits:

  • Reduced Manual Effort: Significant reduction in manual testing effort, allowing the team to allocate more time to exploratory testing and other high-value activities.
  • Faster Release Cycles: By automating regression testing and integrating tests into CI/CD pipelines, we can achieve faster release cycles and quicker time to market.
  • Improved Software Quality: Early detection of defects and higher test coverage lead to improved software quality and user satisfaction.
  • Scalability: Automated tests can be easily scaled to cover more features and functionalities as the project grows, without a corresponding increase in time and resources.

Scope/Test Levels

We would probably like to automate everything. But sometimes it does not make sense, or it is not worth the effort.

  • Define the boundaries of the test automation initiative (e.g., which applications or parts of the application will be automated).
  • Specify what will be automated and what will remain manual.
  • Specify test levels with brief explanations

Example:

  1. Automation Boundaries:

Applications/Parts to be Automated:

  • Core Functionalities: Key features and functionalities that are critical to the application’s operation and are used frequently.
  • Regression Tests: Tests that are run frequently to ensure that new changes have not adversely affected existing functionality.
  • Data-Driven Tests: Tests that can be easily parameterized and executed with different sets of data.
  • Performance Testing: Automated scripts to assess the application’s performance under various conditions.

Areas to Remain Manual:

  • Exploratory Testing: Creative testing to explore the application’s limits and discover unknown issues.
  • Usability Testing: Assessing the application’s user interface and user experience requires human judgment.
  • Complex Scenarios: Testing scenarios that are complex, rarely executed, or would require an excessive amount of effort to automate effectively.

  2. Test Levels:

  • Unit Testing: Automated testing of individual components or modules of the application in isolation. The focus is on the internal logic and functionality of the components.

Note: Usually supported by developers. QA may assist if they feel some cases should be moved down to a lower level for wider coverage. It is nice to have a coverage measurement tool.
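As an illustration of this level (not from the original article; `priceWithTax` is a hypothetical unit under test), a unit test can be a plain function check with no framework at all:

```typescript
// Hypothetical unit under test: pure pricing logic with no I/O.
function priceWithTax(net: number, taxRate: number): number {
  if (net < 0) throw new Error("net price cannot be negative");
  // Round to two decimals to avoid floating-point noise in assertions.
  return Math.round(net * (1 + taxRate) * 100) / 100;
}

// A unit test exercises the internal logic of one component in isolation.
function testPriceWithTax(): void {
  if (priceWithTax(100, 0.2) !== 120) throw new Error("happy path failed");
  if (priceWithTax(0, 0.2) !== 0) throw new Error("zero price failed");
}

testPriceWithTax();
```

In a real C# codebase this would be an xUnit fact; the point is only that the test touches one unit and nothing else.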

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

  • Low-level integration tests: Automated testing of the interactions between components or systems. These tests may still use mocking mechanisms or data in test containers, and they work without a specific environment.

Note: It is easier to test a lot of boundary cases without starting a heavy, full integration environment. Usually supported by developers. QA can participate when they believe something will be easier to cover at this level.
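A minimal sketch of the idea (the repository interface and in-memory fake below are invented for illustration): the service-to-storage interaction is exercised against a lightweight stand-in instead of a full environment:

```typescript
// Hypothetical storage boundary. In a real suite this might be backed by
// a database in a test container; for many boundary cases an in-memory
// fake is enough.
interface UserRepository {
  save(name: string): number;
  findName(id: number): string | undefined;
}

class InMemoryUserRepository implements UserRepository {
  private users = new Map<number, string>();
  private nextId = 1;

  save(name: string): number {
    const id = this.nextId++;
    this.users.set(id, name);
    return id;
  }

  findName(id: number): string | undefined {
    return this.users.get(id);
  }
}

// The integration under test: service logic plus its repository calls.
class UserService {
  constructor(private repo: UserRepository) {}

  register(name: string): number {
    const trimmed = name.trim();
    if (!trimmed) throw new Error("name required");
    return this.repo.save(trimmed);
  }
}
```

Boundary cases (empty names, trimming, id assignment) are cheap to cover here, which is exactly the argument the note above makes.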

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

  • Third-party vendor contract tests: These tests focus on the "contract", or agreement, that specifies the expected requests and responses.

Note: This type of test is popular these days, when many systems depend on third-party vendors. These tests can catch contract changes that could potentially break your logic. Developers usually support them.
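Dedicated tooling such as PACT does this properly; purely as an illustration of the principle (all names below are invented), a contract check reduces to comparing a vendor response against the agreed field types:

```typescript
// Hypothetical contract: the fields and types we expect from a vendor API.
type FieldType = "string" | "number" | "boolean";

const orderContract: Record<string, FieldType> = {
  id: "number",
  status: "string",
  paid: "boolean",
};

// Returns the list of contract violations (an empty array means the
// response honours the contract).
function checkContract(
  contract: Record<string, FieldType>,
  response: Record<string, unknown>,
): string[] {
  const violations: string[] = [];
  for (const [field, expected] of Object.entries(contract)) {
    if (!(field in response)) {
      violations.push(`missing field: ${field}`);
    } else if (typeof response[field] !== expected) {
      violations.push(
        `field ${field}: expected ${expected}, got ${typeof response[field]}`,
      );
    }
  }
  return violations;
}
```

Run against each vendor release, a check like this surfaces breaking changes before they reach an integrated environment.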

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

  • E2E integration testing: Automated testing of the interactions between components or systems against a deployed environment. This level tests the interfaces and the flow of data between units.

Note: These tests are heavier because they require launching the integration environment and setting up test data. This is one of the most important layers the QA team is responsible for. Still, it pays to be smart: a lot of cases have potentially already been covered by low-level integration tests, so you only need to cover the main flow scenarios and some of the alternative flows.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

  • UI Component Testing: Automated testing covers UI component appearance.

Note: These tests use mocked data to verify how a component displays it. Usually the responsibility of the front-end developers; as usual, QA can act as an advisor. In some cases it is easier to test display logic without launching heavy environments, e.g. a specific label, or pagination behavior when a certain number of results is returned.
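For instance, the pagination behavior mentioned above often boils down to display logic that can be pulled out of the component and tested as a pure function (a hypothetical example, not tied to any real component):

```typescript
// Hypothetical display logic extracted from a UI component: the pagination
// summary label can be verified without rendering anything.
function paginationLabel(total: number, page: number, pageSize: number): string {
  if (total === 0) return "No results";
  const first = (page - 1) * pageSize + 1;
  const last = Math.min(page * pageSize, total);
  return `Showing ${first}-${last} of ${total}`;
}
```

The component itself then only has to render the string, and the tricky edge cases (empty result set, partial last page) are covered without a browser.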

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

  • System Testing: Automated end-to-end testing of the application in an environment that simulates production. This includes testing the application’s overall behavior, performance, and security.

Note: These are the most rigorous and comprehensive tests available; they mimic real user behavior. However, remember not to overcomplicate them. Break them down into pieces: maybe you don't need to log in through the UI to test a feature, or maybe you can create/update/clean up data with API calls because you're testing a specific area. These tests are usually the most painful for the QA team. You may also want to split this level into a few layers, e.g. System and Acceptance.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

The test levels and names above are examples only. The names and the levels themselves may differ for your project because of its specific needs. Here I have just tried to describe the basic idea.

Tools/Technology

Discuss the tools, libraries, and architectural decisions that facilitate unit, integration, system, and acceptance testing within your chosen framework.

Example:

Just a quick example. You may want to describe it in more detail.

Unit Tests: C#, XUNIT, Moq (Mocking Framework)

Contract Testing: C#, PACT (for contract testing)

Low-level integration tests: C#, XUNIT, Docker Containers

UI Component tests: React

E2E Integration Tests: C#, RestSharp, NUnit, FluentAssertions, and other helper libraries

System Tests: NodeJS, Playwright/test, TypeScript

Environments

Describe what environment setup is required for each test level. There may also be specifics depending on the environment type, such as DEV, Testing, or Production.

It may also require test data preparation and clean-up strategies (maybe not; just an idea).

Unit Tests: No specific environment required.

Contract Testing: No specific environment required.

Low-level integration tests: Require a running Docker container for database emulation.

UI Component tests: No specific environment required; they will be tested as React components.

E2E Integration Tests:

  1. Require a full environment setup:
  • SQL and Mongo databases.
  • Microservices A, B, C.
  • UI deployment.

  2. Require a data setup and clean-up strategy:

  • We cannot prepare all data through the API, so we need a predefined setup.
  • The API does not support full data clean-up, so we need an image of the environment data to redeploy once every 3 months.

Execution plan

Unit Tests:

  • We are planning to run these tests on each PULL REQUEST.
  • The pull request should be blocked from merging if the coverage level is below 80% or any test fails (developers, please do not remove assertions :)).
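The gate itself can be expressed as a tiny decision function a CI step might call after the test stage finishes (a sketch with assumed field names; the 80% threshold comes from the plan above):

```typescript
// Hypothetical summary a CI step receives after the test stage.
interface TestRunSummary {
  coveragePercent: number;
  failedTests: number;
}

// The PR quality gate: block the merge on any failure or low coverage.
// The default threshold is an assumption; adjust to your project's policy.
function pullRequestAllowed(run: TestRunSummary, minCoverage = 80): boolean {
  return run.failedTests === 0 && run.coveragePercent >= minCoverage;
}
```

A CI pipeline would exit non-zero when this returns `false`, which is what actually blocks the merge.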

Contract Testing:

  • Run whenever a third-party vendor library is updated.
  • We will not block the pull request process, but will send a warning to a dedicated Teams channel (you can follow another strategy if you prefer).

Low-level integration tests:

  • The pipeline agent requires Docker/Docker Compose to be set up.
  • We are planning to run these tests on each PULL REQUEST.
  • The PR is blocked if any test fails.

UI Component tests:

  • We are planning to run these tests on each PULL REQUEST.
  • The PR is blocked if any test fails.

E2E Integration Tests:

  • We plan to run these tests whenever a new DEV environment is deployed.
  • We plan to keep the report history for some period and send details to the Teams channel.
  • We do not block the continuous integration process; a user with release rights can manually trigger the next stage deployment.

System Tests:

  • We plan to run these tests whenever a new TEST environment is deployed.
  • We plan to keep the report history for some period and send details to the Teams channel.
  • We do not block the continuous integration process; a user with release rights can manually trigger the next stage deployment.

Once more, this is just a general outline. You may even want to run some tests on the production environment, but be careful.

Reporting and metrics

For the "Reporting and Metrics" section of a test automation strategy, defining specific Key Performance Indicators (KPIs) for each test level helps in evaluating the effectiveness of the testing efforts and the overall quality and readiness of the project.

1. Unit Tests

  • Code Coverage: Percentage of the codebase executed during testing. Aims for a high coverage rate to ensure most of the code is tested.
  • Pass/Fail Rate: The ratio of passed tests to the total number of tests. High pass rates indicate stable unit functionality.
  • Test Execution Time: Total time taken to run all unit tests. Helps in identifying performance regressions.
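As a small sketch of how such a KPI can be computed from raw run data (the field names here are assumptions for illustration):

```typescript
// Hypothetical raw summary of a test run.
interface RunSummary {
  passed: number;
  failed: number;
  durationMs: number;
}

// Pass rate as a percentage with two decimals; 0 when nothing ran.
function passRate(run: RunSummary): number {
  const total = run.passed + run.failed;
  if (total === 0) return 0;
  return Math.round((run.passed / total) * 10000) / 100;
}
```

Tracked over time (per build, per branch), even a simple metric like this makes stability trends visible to the whole team.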

2. Contract Tests

  • Number of Contract Breaches: Instances where the actual service responses deviate from expected contract definitions. Fewer breaches indicate better alignment between services.
  • Contract Coverage: The extent to which the interactions are covered by contract tests. A higher percentage ensures more interactions are verified against the contract.

3. Low-Level Integration Tests (Mocked data or DB containers)

  • Integration Coverage: Measures the proportion of integrated components or services tested. A higher figure suggests thorough testing of interactions.
  • Mock/Stub Accuracy: The percentage of mock or stub data that accurately represents production data, ensuring realistic test scenarios.

4. Mocked UI Component Tests

  • Component Coverage: Percentage of UI components tested through mock scenarios. Aims for comprehensive coverage to ensure UI reliability.
  • Visual Regression Issues: The number of instances where the UI appearance deviates from expected designs, indicating potential UI inconsistencies.

5. E2E Integration Tests

  • System-Wide Coverage: The extent to which end-to-end workflows are tested, ensuring comprehensive testing of integrated systems.
  • Critical Path Success Rate: The success rate of tests covering critical business workflows, important for assessing the readiness of core functionalities.

6. E2E UI Tests

  • UI Workflow Completion Rate: Percentage of UI-driven workflows that complete successfully, indicating the reliability of user-facing features.
  • Browser/Device Compatibility Issues: Number of issues found across different browsers and devices, important for understanding cross-platform reliability.
  • User Journey Success Rate: Measures how often complete user journeys (from entry to the desired outcome) are successfully executed without issues.

Roles and responsibilities

Define the roles that can be involved in the test automation process. It is important to understand that these are not just people who write code. Product owners/business analysts may also be involved in this process. For example, if you are using the Gherkin BDD approach, some business people may create Given-When-Then scenarios.

Example:

DEV Lead/Architect:

  • Acts as the low-level test architect
  • Coaches DEV team members
  • Communicates with the QA Lead on test strategy

QA Lead/QA Automation Lead/QA Architect:

  • Defines the overall test automation strategy.
  • Acts as the high-level test automation solution architect
  • Coaches QA team members
  • Guides team communication regarding automated tests
  • Controls the quality of the automated tests

QA with Automation skills:

  • Creates/reviews/controls tests
  • Splits test scenarios across different levels (the testing pyramid)
  • Improves test stability
  • Communicates with other QA engineers (manual testing skills)

I may be repeating myself, but it's important: I have only given a snapshot here. It may differ depending on the specifics of your team, project, and test levels.

Risks

It is important to be prepared for worst-case scenarios and to know what to do if they occur. Your risk table may look like this:

Risk name | Severity | Probability | Mitigation plan

I was too lazy to write this point :) Put down whatever you think might be a risk, wherever you think it might be one. If you are prepared, you are armed.

Before goodbye

The examples we looked at today were rather abstract. I don't pretend that this is complete or includes every point; more than that, I only described one functional aspect. The idea was to show the importance of the document and to encourage you to make it part of your quality assurance process. Your test automation strategy may be a little bigger or a little smaller. It may include more reporting details, cross-team communication, and other things. The important thing is that it is useful for your needs. Maybe you want to copy an existing template? But do you really need all of its steps? Is the template missing steps that are important to you? Be flexible, and add the steps you think are important.

Finally, I would like to leave a few tips: the kind I would like to send to my past self.

A few tips

  1. Do not hesitate to start writing the test automation strategy at the beginning, once you have analyzed your project's needs.
  2. Don't stop if you get stuck. You are always learning and can come back later to fill in the gaps.
  3. A test automation strategy is not static. You learn and find new approaches and gaps, and the strategy should evolve with you. Add new sections; remove sections that don't work.
  4. Consult with team members (not only QA) and with other teams. Several heads are better than one.
  5. Make it available in your organization (if there are no security issues, of course). It is nice if you can provide a navigation link to managers or other teams. If your organization has Teams WIKI pages, this is one of the best places to put it.
  6. Demonstrate it and encourage other team members to participate in the test automation strategy lifecycle. Quality is a team responsibility.
  7. You should have the courage to break something that is not working. Listen to and analyze feedback. I know you are powerful, but together you are invincible.
  8. Once you start working on it regularly, you will find that it is as enjoyable as solving any code challenge. It is just a different kind of quest.

Short story

In one of my previous jobs, two teams had to transfer knowledge to another team that would support the project. The functional parts of these projects were close to each other. We organized a meeting where both teams (Team A and Team B) transferred knowledge to a third one (Team C).

  • Team A was fully prepared not only with code repositories but also with good documentation including test automation strategy, plans and user guides.
  • Team B could only demonstrate code.

Which team do you think Team C had more questions for? :) I know, you cannot judge by one point alone.

What would you like to add to your test automation strategy?

That’s all for today. Be Agile, think like a bug and love Test Automation Strategy!


Kostiantyn Teltov

From Ukraine with NLAW. QA Tech Lead/SDET/QA Architect (C#, JS/TS, Java). Likes to build testing processes and help people learn. Dreams of making indie games.