Automation vs Testing

What does it mean to test rather than automate?

Lana Begunova
11 min read · Mar 23, 2024

Software testers (SDETs) might have deep knowledge of automation techniques for web browsers and mobile devices. However, automation is not the same thing as testing! The objective of learning automation is to become a better software tester. So, how do we distinguish the concepts of automation and testing?

Automation vs Testing

Automation is not the same thing as testing.

While software testing frequently incorporates automation nowadays, and automation is commonly used to test software, these two concepts are fundamentally distinct. Nevertheless, it can be hard to keep them from blurring together. Conflating them poses a genuine risk, as it may divert attention toward automation and overshadow the primary objective: effective testing.

https://www.linkedin.com/posts/james-bach-6188a811_one-of-the-more-embarrassing-things-to-me-activity-7170575836792782848-gqa3?utm_source=share&utm_medium=member_desktop

Automation serves testing, not the other way around. It's possible to use automation for counterproductive purposes, leading to obstacles rather than progress.

Ultimately, automation serves as a tool to achieve our testing objectives. Simply automating every aspect of our app does not guarantee effective tests. Testing involves putting our app into the right conditions and asking the right questions of it. We have the flexibility to employ automation in many ways, even toward inappropriate goals, resulting in tests that tell us little about the app.

We should prioritize our role as testers above that of automators, even though a significant portion of our time is dedicated to writing automation.

Therefore, it remains crucial to prioritize viewing our app from the perspective of testers first and foremost, relegating our roles as automators to a secondary position, even when we dedicate most of our time (75–95%) to writing automation code.

Think Like a Tester

How do we think as testers first, to ensure that our automation is put to good use? There are a few tips we can employ to achieve that.

Familiarize ourselves thoroughly with the app from the perspective of a user. Embrace a power-user mentality by becoming proficient in using our own app to its fullest extent.

  • It’s essential for us to comprehend our product fully. This involves gaining a deep understanding of our application, the issue it aims to address, and the intentions behind its design and development. Without this comprehensive understanding, our ability to effectively test the application will be compromised, and we’ll struggle to empathize with the users who ultimately interact with it.

Understand the desires, intentions, and frustrations of the users. We act as guardians of app quality from a product design standpoint, not solely as bug detectors.

  • In light of this, it’s imperative that we understand our users. Who comprises our app’s user base? What objectives are they striving to accomplish with it? What are their wants and intentions? In a perfect world, the app would’ve been meticulously crafted with the user in focus, with product owners/managers deeply considering and addressing user needs. However, this isn’t always the reality. At times, there may be subtle discrepancies between the product and its users, which we, as testers, can help uncover.
  • In a proficient team, every team member bears responsibility for ensuring the product aligns with user needs. As testers, we often serve as the final line of defense (quality gatekeepers) for the user, capable of identifying potential flaws or features that may not have been adequately conceived or developed. It’s crucial not to underestimate the significance of this role. Testing is fundamentally about ensuring product quality, and as testers, we often possess an innate ability to intuit when something isn’t quite right with a release. Hence, prior to or during the automation, it’s vital to remain attuned to these considerations, prepared to provide prompt feedback to fellow team members.

Empathize with the users to uncover edge cases that may have been overlooked by product designers. However, discern which edge cases are significant and warrant attention, and which ones do not.

Pertaining to understanding our users, it’s essential to envision all the potential scenarios our users could encounter. This is especially crucial regarding user input. Apps are frequently developed with a narrow set of specifications, and developers often make assumptions about the types of input users might provide. For instance, it’s common for developers in North America to overlook the need for supporting Unicode in username and password fields. However, users from various regions around the world may require Unicode support to accurately complete forms in their native language. Similarly, developers might not consider languages that utilize right-to-left writing, and the impact this has on the user interface. These oversights occur frequently in development.
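
To make this concrete, here is a minimal pytest sketch that feeds non-ASCII input into a signup flow. The `submit_signup_form` helper is a hypothetical stand-in for whatever page objects or API client a real suite would use; its placeholder logic exists only so the example runs.

```python
import pytest

def submit_signup_form(username: str, password: str) -> bool:
    """Stand-in for the real signup flow (e.g., driving the UI via WebDriver).
    Replace with your suite's own page objects or API client."""
    return bool(username) and len(password) >= 8  # placeholder logic for the sketch

UNICODE_USERNAMES = [
    "Björk",      # Latin with diacritics
    "Анна",       # Cyrillic
    "山田太郎",    # CJK
    "مريم",       # Arabic (right-to-left script)
    "👩‍💻dev",     # emoji mixed with ASCII
]

@pytest.mark.parametrize("username", UNICODE_USERNAMES)
def test_signup_accepts_unicode_username(username):
    # Each username above is legitimate input somewhere in the world.
    assert submit_signup_form(username, "S3cure!pass"), (
        f"Signup rejected a valid username: {username!r}"
    )
```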

Internationalization/Globalization and Localization Testing — these kinds of testing differ but have the same goal, which is to ensure an exceptional user experience for global market users.

  • Localization is about adapting the UI and the content for a specific region and locale. It takes into consideration the region’s cultural and linguistic specifics. During this kind of testing we may verify translated text, address formats, date and number formats, keyboard usage, culture-appropriate visual alignment, time/currency formats, and so on. For example, an eBay user can choose their preferred language, and the page content gets translated into that language. The content is localized to meet the requirements of a specific culture and region. Here’s an example of the eBay site in Japanese:
https://www.ebay.co.jp/
  • Internationalization/Globalization focuses more on product features and capabilities that allow the app to be used globally, for example, the ability to change the language of the website, as in the prior eBay example. The aspects we test include support for multiple languages, different time zones, various numeric formats, and text that runs in different directions (left→right vs right→left). Another globalization feature an app might offer is personalized items based on the user’s location. A minimal locale-formatting check along these lines is sketched after this list.
https://learn.microsoft.com/en-us/globalization/methodology/software-internationalization
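
As an illustration of a locale-aware check, the sketch below uses the Babel library as a formatting oracle (assuming `pip install babel`). The `get_displayed_price` helper is hypothetical; in a real suite it would read the rendered price from the app for a given locale instead of echoing Babel's own output.

```python
import pytest
from babel.numbers import format_currency

def get_displayed_price(amount: float, currency: str, locale: str) -> str:
    """Stand-in for reading the rendered price from the app under a given locale
    (e.g., via WebDriver after switching the site language)."""
    return format_currency(amount, currency, locale=locale)  # placeholder

@pytest.mark.parametrize("locale,currency", [
    ("en_US", "USD"),
    ("ja_JP", "JPY"),
    ("de_DE", "EUR"),
])
def test_price_is_formatted_for_locale(locale, currency):
    # Babel gives us the culturally correct rendering to compare against.
    expected = format_currency(1234.5, currency, locale=locale)
    assert get_displayed_price(1234.5, currency, locale) == expected
```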

We typically don’t anticipate users having more than 100 or 1000 items in their cart, or a checkout amount exceeding 5–6 digits. However, as testers, it’s crucial to shift away from our perception of normalcy and adopt an exploratory mindset. We need to contemplate the full spectrum of possibilities regarding how individuals might utilize the app.

https://en.wikipedia.org/wiki/Edge_case

In the world of testing, when we consider a highly unconventional scenario that is still feasible, we refer to it as an edge case. These are situations that developers may not have anticipated to either accommodate or restrict, potentially resulting in unexpected behavior for users who encounter them. Our responsibility is to identify these edge cases and transform them into test cases. However, this task can be challenging due to the myriad of potential edge cases for many applications. Therefore, we must leverage our comprehension of the app and its users to make informed decisions about which edge cases merit inclusion in tests.
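
For example, a handful of cart-quantity edge cases can be turned into explicit, parametrized test cases. The boundaries and the `add_to_cart` helper below are illustrative assumptions for the sketch, not real product limits.

```python
import pytest

MAX_QUANTITY = 999  # assumed limit, purely for illustration

def add_to_cart(quantity: int) -> bool:
    """Stand-in for the real add-to-cart flow (UI or API)."""
    return 1 <= quantity <= MAX_QUANTITY  # placeholder logic

@pytest.mark.parametrize("quantity,should_succeed", [
    (0, False),       # nothing to add
    (1, True),        # typical case
    (999, True),      # at the assumed limit
    (1000, False),    # just past the limit
    (10_000, False),  # absurd but entirely possible user input
])
def test_cart_quantity_boundaries(quantity, should_succeed):
    assert add_to_cart(quantity) is should_succeed
```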

Ensure absolute clarity regarding test requirements. Test requirements are often provided to us in a vague manner, and it is our responsibility to seek clarification.

Ensuring complete clarity on the test requirements is essential. Often, the test requirements provided to us may turn out to be vague or ambiguous. Nonetheless, it falls upon us to translate them into a set of automated test cases, leaving no room for uncertainty. In such instances, it’s our responsibility to seek clarification regarding the intended meaning of the requirements. Frequently, a single requirement provided to us, such as “test the login flow,” will necessitate the creation of numerous individual test cases. Only when these test cases are combined will they truly verify the quality of the login flow.
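
As a sketch of that expansion, a single vague requirement like “test the login flow” might become several parametrized cases. The `login` helper and the specific behaviors asserted here are assumptions for illustration; the real cases would come from the clarified requirements.

```python
import pytest

VALID_USER, VALID_PASSWORD = "lana", "correct-horse-battery"

def login(username: str, password: str) -> bool:
    """Stand-in for driving the real login UI or API."""
    return (username, password) == (VALID_USER, VALID_PASSWORD)  # placeholder

@pytest.mark.parametrize("username,password,expected", [
    (VALID_USER, VALID_PASSWORD, True),            # happy path
    (VALID_USER, "wrong-password", False),         # bad password
    ("unknown-user", VALID_PASSWORD, False),       # unknown account
    ("", "", False),                               # empty form submission
    (VALID_USER.upper(), VALID_PASSWORD, False),   # case sensitivity: worth clarifying with product
])
def test_login_flow(username, password, expected):
    assert login(username, password) is expected
```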

Testing is both an art and a set of established practices, with numerous insights to be gained over time. However, the primary point to remember is that we are not merely machines, even when we program a software robot to test our app. Our responsibility is to engage our intellect and think critically from various perspectives when testing our software. This is how we add value as testers, rather than merely functioning as a human cog in a large system tasked with converting requirements into automation. The latter role is already being assumed by AI/ML bots (think of LLMs, like ChatGPT, which can take requirements as an input prompt and output automation scripts in a language of choice). However, aspects such as deep product understanding, user empathy, creative exploration, and translating vague requirements into well-defined test cases are unlikely to be automated anytime soon.

https://www.codeitbro.com/funniest-artificial-intelligence-memes/

Act Like a Tester

We’ve discussed some valuable conceptual guidance for adopting a tester’s mindset. However, regarding our test code, there’s also a plethora of practical advice that merits consideration.

Employ a reliable test runner and framework. Avoid reinventing the wheel, unless the aim is to understand its mechanics. Pytest stands out as an excellent choice for Python. Take the time to thoroughly understand the suite of testing tools and frameworks, going beyond just familiarity with WebDriver commands.

It’s always advisable to utilize a robust test runner and framework tailored to your programming language. Numerous technical concepts and features essential for various types of testing are universally applicable, making it impractical for us to develop them from scratch. For instance, the fundamental concepts of test passes and failures are critical components that should be incorporated into our test framework. While it’s possible to write automation scripts directly in Python and execute them through command-line calls, determining the success or failure of a test then becomes cumbersome: we would need to manually inspect command-line output for exceptions or irregularities, which isn’t conducive to scaling up our test suite. Instead, we should aim for the ability to execute any number of tests and receive a comprehensive report indicating which tests passed and which failed. This functionality is typically included in reputable test frameworks. Although there are various options available, I personally prefer Pytest, which is an excellent choice for Python. Therefore, it’s essential for us to not only master the WebDriver Python client APIs but also familiarize ourselves with the APIs and usage patterns of the chosen test framework. In Java, analogous options include TestNG, JUnit, and others.
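
For instance, a minimal pytest module might look like the sketch below. Running `pytest -v` collects every `test_*` function, reports each pass or failure individually, and exits non-zero if anything fails, so nothing has to be scraped from console output by hand. The `apply_discount` function is a toy stand-in for real app behavior.

```python
# test_checkout.py
def apply_discount(total: float, percent: float) -> float:
    """Toy function under test; in a real suite this would be app behavior."""
    return round(total * (1 - percent / 100), 2)

def test_discount_applied():
    assert apply_discount(100.0, 10) == 90.0

def test_no_discount_leaves_total_unchanged():
    assert apply_discount(59.99, 0) == 59.99
```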

Learn the entire Software Development Life Cycle (SDLC). Testing constitutes only a single component, and it’s crucial to understand its place within the broader context. Acquire knowledge about our team’s Continuous Integration (CI) server. Become skilled at DevOps, able to set up testing systems from scratch as part of a CI pipeline.

https://en.wikipedia.org/wiki/Continuous_integration#/media/File:Continuous_Integration.jpg

It’s important to recognize that automated tests are typically not designed for standalone execution. Testing constitutes just one facet of the entire development cycle, with many other components also subject to automation. For the automated tests we create to be effective, they must be integrated into a pipeline within this system, commonly known as a Continuous Integration (CI) system. At the core of this system lies a Continuous Integration server, or CI server. Consequently, we may need to acquaint ourselves with the CI server utilized by our team, or even take on the responsibility of setting one up. Our responsibilities might extend to engaging with CI servers, understanding how to deploy and execute our test suite as part of a CI server pipeline, and even configuring remote Appium and Selenium environments to provide devices and browsers for our tests. This entails a significant amount of work, and traditionally, it would have fallen within the domain of the operations team to manage these tasks. However, in contemporary times, many responsibilities that were once exclusive to the operations team are now distributed among team members based on their specific roles and tasks.
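
As a small illustration, a pytest fixture might point WebDriver at a remote Selenium Grid whose URL the CI pipeline supplies. The `SELENIUM_REMOTE_URL` variable name and the headless Chrome options are assumptions for this sketch, not a standard.

```python
import os
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

@pytest.fixture
def driver():
    options = Options()
    options.add_argument("--headless=new")  # typical for CI agents without a display
    # The CI pipeline would inject the grid URL; localhost is a fallback for local runs.
    grid_url = os.environ.get("SELENIUM_REMOTE_URL", "http://localhost:4444/wd/hub")
    driver = webdriver.Remote(command_executor=grid_url, options=options)
    yield driver
    driver.quit()

def test_home_page_title(driver):
    driver.get("https://example.com")
    assert "Example" in driver.title
```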

A whiteboard snapshot from one of my meetings dedicated to the role of Testing in DevOps.

Employ effective software design patterns. Continuously enhance and refine your software proficiency. The optimal pattern often varies depending on the situation, so expose yourself to diverse scenarios. Cultivate software wisdom over time and through practice, transcending mere skill acquisition.

  • It’s important that as we embark on constructing an automation test suite, we adhere to sound software design patterns. Here, patterns simply refer to consistent ways of structuring code to address specific types of problems (SOLID, YAGNI, DRY, the Boy Scout Rule, etc.). To excel as an automated tester, continuous improvement in our software development practice is indispensable. There are countless patterns to explore. However, the challenge with effective software design patterns lies in their situational dependence.
  • A pattern deemed indispensable in one scenario might be excessive in another. Additionally, patterns typically entail trade-offs. While some patterns yield concise code, they may complicate comprehension until one becomes familiar with the codebase. It’s imperative to always understand the objectives we’re aiming to optimize for in the design of our test suite and to select patterns accordingly. Consequently, exploring design patterns outside the context of an actual, evolving test suite proves challenging. Developing a keen understanding of which patterns to apply requires time, practice, and extensive discussions with team members regarding the various options, as well as their advantages and disadvantages in specific situations. One widely used pattern in UI automation, the Page Object pattern, is sketched after this list.
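
As one concrete illustration (not named in the list above), the Page Object pattern keeps locators and WebDriver calls inside a page class so that tests read as user intent. The locators and URLs below are hypothetical, and the `driver` fixture is assumed to exist elsewhere in the suite, for example as sketched in the CI section above.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webdriver import WebDriver

class LoginPage:
    # Hypothetical locators; real values would come from your app.
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver: WebDriver):
        self.driver = driver

    def open(self, base_url: str) -> "LoginPage":
        self.driver.get(f"{base_url}/login")
        return self

    def log_in(self, username: str, password: str) -> None:
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_valid_login_lands_on_dashboard(driver):
    # The test states intent; the page object owns the mechanics.
    LoginPage(driver).open("https://app.example.com").log_in("lana", "secret")
    assert "/dashboard" in driver.current_url
```

If the login screen changes, only `LoginPage` needs updating, while every test that uses it stays untouched; that maintainability trade-off is exactly the kind of situational judgment discussed above.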

Zen, Art, Values

We’ve covered some concepts which matter a great deal when we take our automation knowledge and move into the domain of actually testing our apps in good and responsible ways. And although testing is not the same as living, in conclusion I’d like to recommend a book by Robert Pirsig titled Zen and the Art of Motorcycle Maintenance: An Inquiry into Values. It’s a unique blend of autobiographical narrative, philosophical exploration, and an examination of the fundamental questions of how to live life.

Simon Stewart, the Selenium Project Lead, also advocates for joyful and rewarding development. In his video, he explains how neurochemistry and software development are connected.

I welcome any comments and contributions to the subject. Connect with me on LinkedIn, X, GitHub, or IG.

Happy testing and debugging!


Lana Begunova

I am a QA Automation Engineer passionate about discovering new technologies and learning from them. The processes that connect people and tech spark my curiosity.