When Manual QA becomes a toothless tiger

Bhargav Sangani
7 min read · Aug 7, 2023

In 2019, I moved to a completely new city, the city of dreams: Mumbai. There were a lot of mixed emotions: nervousness, excitement, and more. I was barely two years into my career as a Software Engineer and didn't have enough experience to think about big-picture stuff.

The first day went well; colleagues were friendly. I just explored the tools they were using day to day, like GitLab, Slack, and Google Workspace. Nothing groundbreaking so far. I met a few people from development and product design, and called it a day.

The Big Reveal of the Manual QA Team

The next day I joined a scrum meeting, and the manager introduced me to the QA team. In my utter ignorance, I had never heard of a specialised QA team in software development before. The QA lead later explained to me the software delivery process we were supposed to follow. A quick illustration: the developer writes the code, checks the scenarios given by the QA team, hands the change off to manual QA for testing on staging, and the change ships once QA approves.

At the time it made perfect sense. The developer wouldn't be the only one testing the code; a second pair of eyes would always approve the change before it made it into production.

In my first two companies, I never had a dedicated QA team. They were small software shops, and the developer writing the code was fully responsible for the quality and integrity of the feature. Coming from that environment, this new way of working sounded like a godsend. I had finally landed the dream company I had been yearning for.

I eagerly wait to be assigned a ticket so my first contribution can land in production. Meanwhile, let me check out the code quality. I clone the backend repo to get some insight into the nitty-gritty of the product. Enthusiasm level: 100%.

I read a few APIs, written in Node.js. Okay, the variable names are cryptic, and no code formatting tools are in use. No worries, let's read some automation tests to understand the behaviour. Enthusiasm level: 80%.

I search the whole workspace (Ctrl + Shift + F) with a few patterns like test*, *spec.js, tests/*, etc. No results at all. Okay, maybe they are using some other pattern; let me ask a guy already working on the project. He replies,

"We don't write automation tests here. Just write the code, test the scenarios given by the QA team, and push for manual QA to test further."

I don't know how to react. Internal monologue:

Maybe this manual QA team is so good at their work that my new team has completely abandoned the idea of automation tests. Let's follow along; I'm excited to work with a team whose philosophy differs from what I'm used to.

Days pass. I deliver my perfect code, plucked straight from the code-quality Garden of Eden. The QA team has prepared a few test cases and asked me to verify all the scenarios myself before handing the code off to them. The tests pass, and I hand off to the QA team.

The QA guy, Tim, a diligent engineer with great attention to detail, started testing my change on the staging server. He found a few cross-browser issues, which I fixed with little effort. In the meantime, I wandered through the project's Google Workspace. They were tracking bugs in Google Sheets 😑. Multiple sheets had anywhere from tens to hundreds of bugs piled up and deprioritized, thanks to the sheer amount of work required to resolve them and the lack of engineering resources.

While approving the change, the QA guy also found a few more critical issues that already existed in the prod version. He added them to one of the bug spreadsheets and gave the go-ahead to push.

Later in the week we had a scrum meeting where the mighty PM sat with all the peasants to report each member's velocity, the overall velocity of the team, and what went wrong and what went right. Almost all members sat with depressed faces, with an average velocity of 1–2 hours a day. Yes, they were calculating velocity in hours.

The QA team was grilled in the meeting, on the premise that they were the ones responsible for catching all the bugs before code moved to pre-production or production. They would just meekly accept the feedback, saying "Yes, we will be more careful in the future."

What followed over the next couple of months was complete chaos: issues raised by clients, the product crashing while the sales team was demoing to prospects. Every day there was a new fire to put out that would otherwise burn through our revenue. All these problems were viewed through the same lens:

What is the QA team doing? They are supposed to catch absolutely everything, right? Right?

It took a huge loss of potential revenue for them to understand that manual QA is reduced to a toothless tiger when developers are not given ownership of the code.

Shitty code quality didn't make developers flinch as long as the end-to-end manual tests given by the QA team passed reliably. Little thought was given to concurrency, and the system would clearly break under heavy concurrent load. But hey, the QA team could not reproduce the concurrency issue, so why bother thinking about it?
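To make that concrete, here is a toy sketch (not our actual product code) of the classic read-modify-write race this kind of system is vulnerable to. Two concurrent handlers read the same counter, and one increment is silently lost:

```js
// race.js — a toy illustration of a lost-update race.
// Run with: node race.js
let unreadCount = 0; // stands in for a value stored in a database

async function readCount() {
  await new Promise((r) => setTimeout(r, 10)); // simulated DB latency
  return unreadCount;
}

async function writeCount(value) {
  await new Promise((r) => setTimeout(r, 10));
  unreadCount = value;
}

async function handleNewMessage() {
  const current = await readCount(); // both handlers read 0
  await writeCount(current + 1);     // both write 1; one increment is lost
}

(async () => {
  await Promise.all([handleNewMessage(), handleNewMessage()]);
  console.log(unreadCount); // prints 1, not the expected 2
})();
```

A human tester clicking through the UI will almost never hit this interleaving on demand, which is exactly why "QA could not reproduce it" proved nothing.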

How we fixed it

Static Analysis

We planned a huge refactor of the core chat engine, written in Node.js, as well as the frontend, written in React. There was no notion of linting or formatting anywhere in the codebase. We added ESLint with a strict config, and planned a complete replacement of duplicated code with specialised classes that encapsulate the functionality and can be extended as required.
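For a flavour of what "strict" means here, a minimal sketch of such a config. The exact rules we enabled are not in this post, so treat the set below as representative assumptions:

```json
// .eslintrc.json — a minimal sketch (ESLint allows comments in its
// JSON config). The rules below are representative assumptions, not
// the exact set we enabled.
{
  "extends": "eslint:recommended",
  "env": {
    "node": true,
    "es2021": true
  },
  "rules": {
    "eqeqeq": "error",
    "no-var": "error",
    "prefer-const": "error",
    "no-unused-vars": "error",
    "curly": "error"
  }
}
```

Running `npx eslint .` in CI (more on that below) is what gave the config teeth.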

Bug Reporting

We introduced GitLab issues to manage bugs instead of Google Sheets. Now we could reference the exact bugfix PR in an issue and, when something related happened later, look back at exactly what we did to solve the bug. There are a host of other advantages that can't be covered in a few sentences.
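As one example of what replaced the free-form spreadsheet rows: GitLab lets you check issue description templates into the repo, so every bug report arrives in a consistent shape. A hypothetical template along these lines (the exact fields we used are an assumption):

```markdown
<!-- .gitlab/issue_templates/Bug.md — hypothetical sketch -->
## Summary
One line describing the bug.

## Steps to reproduce
1. ...
2. ...

## Expected vs actual behaviour

## Environment
Browser / OS / staging or production

/label ~bug
```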

Automation Tests

We planned a gradual addition of unit tests and integration tests as we refactored the core functionality. None of the developers had experience writing automation tests at scale; the entire team had joined this startup straight out of college, so no one brought a different perspective on how to do it. We had to train the developers to write automation tests, and to write them whenever they modified any part of the codebase.
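For illustration, a minimal sketch of the style of unit test we trained on, assuming Jest as the test runner. formatMention() is a hypothetical helper, inlined here so the example is self-contained:

```js
// mention.test.js — a minimal unit-test sketch, assuming Jest.
// formatMention() is a hypothetical helper, not real product code.
function formatMention(username) {
  if (typeof username !== 'string' || username.trim() === '') {
    throw new Error('username must be a non-empty string');
  }
  return '@' + username.trim().toLowerCase();
}

test('prefixes and normalises the username', () => {
  expect(formatMention('  Alice ')).toBe('@alice');
});

test('rejects empty usernames', () => {
  expect(() => formatMention('')).toThrow('non-empty');
});
```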

We added a continuous integration pipeline to check code quality and run these tests whenever a new PR is raised.
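A simplified sketch of such a pipeline in GitLab CI terms; the stage names and Node image below are illustrative, not our exact configuration:

```yaml
# .gitlab-ci.yml — simplified sketch, not our exact pipeline
stages:
  - lint
  - test

lint:
  stage: lint
  image: node:18
  script:
    - npm ci
    - npx eslint .

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test
```

With this in place, a PR with a lint error or a failing test simply cannot merge, no matter how confident the author is.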

Code Review

We introduced compulsory peer reviews before any code made it into staging for UAT (User Acceptance Testing). This way developers could help and learn from each other while keeping code quality in check.
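One way to enforce reviews in GitLab is protected branches combined with a CODEOWNERS file; a hypothetical sketch with placeholder usernames, not our actual setup:

```plaintext
# CODEOWNERS — hypothetical sketch. Paired with GitLab's
# "require approval from code owners" protected-branch setting,
# this blocks merges until the right reviewer signs off.
*.js        @backend-lead
/frontend/  @frontend-lead
```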

The QA team was a little confused and curious at the same time, considering the amount of time devs were now taking to implement new features and refactor.

Results started showing within 6 months. Production fires requiring a whole army of developers on the fly dropped to roughly one every two months, compared to one every two days. Our core chat engine became as stable as a zen garden. Developers started pulling far more weight in terms of the reliability of the software.

Evolution of QA Team


As developers started thinking about code quality and reliability first-hand, our velocity in delivering features increased. The QA team no longer had to be paranoid about a bug lurking in the dark even when everything seemed to work fine at the moment. Earlier, the QA team was responsible for exhaustively testing every scenario humanly possible; now they just had to perform exploratory and acceptance testing.

Manual QA was no longer the punching bag to punch whenever a severity-1 issue arose; it became a way to get human feedback on a code change.

Our QA team started generating ideas on how the feature under test could be improved, and began writing clear, detailed issue descriptions.

Our final delivery process now looked like this: code with automation tests, a CI pipeline checking quality, compulsory peer review, and then exploratory and acceptance testing by the QA team before release.

What I learned from this year-long exercise at the company:

Manual QA is not the last stand

Manual QA will work against you when you try to use it as the last stand. Developers will dump code changes on the QA team once basic checks pass, changes they should be dumping on automation instead. Allocating a manual QA team to execute a complete testing suite was a significant waste of human potential.

Shift left in testing as much as possible

When quality shifts left, manual QA becomes most fruitful, since testers get time to perform exploratory testing, user acceptance testing, and so on.

Your manual QA team is made of humans. If the value stream is designed in such a way that the entire burden of quality rests on manual QA, the process is inherently flawed and destined to produce unreliable software. Use manual QA, i.e. humans, to assess your software's user experience, while delegating exhaustive quality checks to robots, i.e. automation tests.

A balance between automation and manual QA is paramount to creating a value stream that delivers reliable software at scale.

Peace!!
