Let’s be honest, in some places, QAs are not respected by other team members.

Karla Mieses
5 min read · Jan 15, 2024
Photo by Jason Goodman on Unsplash

“We developers build stuff, QAs build nothing”, “I hate QAs and PMs”, “Why are QAs so annoying?”, “Anyone can be a QA”… and the list goes on, although maybe I agree with the PM one (just kidding). By the way, I am not offended, or at least not anymore. The truth is that the QA Engineer position, or, as a developer once minimized it to me, just “Tester”, tends to be underrated in some companies. This is especially true when developers spend a significant amount of time building a feature, fix, or improvement, and it only takes a “few” minutes to validate. Let’s be real: sometimes that does happen. However, I think team members minimize QA work mostly for the following reasons:

The scope of testing is not fully clear to other team members: Testing involves several validations, no matter how simple a change may be. The job of a QA is to validate the specific changes and understand which other areas of the application those changes impact. Even if the changes themselves work, the QA needs to identify gaps between the current implementation and the rest of the application and report them to designers, developers, and PMs. The QA Engineer also needs to ensure the user experience aligns with business and user needs. And this is just the manual, and very important, part of it, not to mention creating or updating automation scripts.

Let’s imagine a simple example: renaming a filter that shows filter chips when it is selected, and that is saved along with a report when the report is saved.

In this scenario, you (PM, dev… whoever) might think this is a very straightforward test: you log into the application and validate the filter has been renamed. Easy! However, it goes deeper than that. The QA would need to execute manual testing by validating that:

  • The filter has been renamed properly as per requirements.
  • The filter chip shows the new filter’s name.
  • The filter chip still renders correctly in mobile or tablet view, especially if the new name is significantly longer.
  • The filter is saved when the report is generated and saved.
  • The filter and its options are displayed when retrieving the saved report.
  • The filter selection can be edited.
  • Accessibility testing:
    • Can I reach the new filter using only my keyboard?
    • Automated checks can be run with a tool like Axe or BrowserStack’s accessibility testing.
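The automated side of that accessibility pass can be scripted with the cypress-axe plugin (a real Cypress plugin wrapping axe-core); the route and the filter container selector below are hypothetical, matching the names used later in this article:

```javascript
// Sketch of an automated accessibility check with cypress-axe.
// Setup (assumption): npm install --save-dev cypress-axe axe-core
// and `import "cypress-axe"` in cypress/support/e2e.js.
// The route and selector are hypothetical examples.

describe("Filters: accessibility", () => {
  beforeEach(() => {
    cy.visit("/where-my-filter-is")
    cy.injectAxe() // load axe-core into the page under test
  })

  it("has no detectable a11y violations around the filters", () => {
    // Scope the scan to the filter region instead of the whole page
    cy.checkA11y('[data-testid="filters"]')
  })
})
```

Keyboard reachability itself (tabbing to the filter) still needs a manual pass or a plugin that simulates real key events, since plain Cypress does not ship a Tab command.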

Then, the QA will stop to think 🤔 about where else this new filter impacts the application, and start exploratory testing to ensure nothing is missed.

Then, the QA would move to automation testing; this can start before or after delivering to production:

  • Can my above manual tests be automated?
  • Is it worth adding these tests to automation? How significant and impactful is this or the set of filters from this screen?
  • Do we already have automation testing for this filter? Do we already have automation for all the filters on this screen?
  • What unit testing is in place for this feature?
  • Do we have proper test-ids for the filter’s label, filter’s input, filter’s dropdown, or any other type of filter it could be?
  • If not, let’s add the test-ids or ask developers to do it.
  • Are the test-ids ready?
  • Yes? Let’s build automation for it:

```javascript
/// <reference types="cypress" />

describe("Medium Screen: Filters", () => {
  before(() => {
    cy.logInThroughApi("my-medium-post@gmail.com", Cypress.env("password"))
    cy.visit("/where-my-filter-is")
  })

  after(() => {
    cy.deleteReportThroughApi()
    cy.logoutThroughApi()
  })

  it("should check new filter 'Medium' exists", () => {
    cy.getBySelector("my-filter-name").should("exist")
    cy.getBySelector("my-filter-name").should("have.text", "Medium")
  })
})
```
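Note that cy.getBySelector is not built into Cypress; it is presumably a custom command wrapping test-id lookups. A minimal sketch, assuming a data-testid attribute (the attribute name and command name are assumptions based on this article's usage):

```javascript
// Sketch of the custom getBySelector command the specs here rely on.
// Cypress itself only ships cy.get(); the data-testid attribute name
// is an assumption - adjust it to whatever your app renders.

// Pure helper, kept separate so it can be unit tested outside Cypress.
const bySelector = (id) => `[data-testid="${id}"]`

// In cypress/support/commands.js:
// Cypress.Commands.add("getBySelector", (id) => cy.get(bySelector(id)))
```

Centralizing the selector format this way means a change to the test-id attribute touches one line instead of every spec.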

What the QA is scripting:

```javascript
/// <reference types="cypress" />

describe("Medium Screen: Filters", () => {
  const filters = [
    { name: "Age", testId: "age-filter" },
    { name: "Medium", testId: "medium-filter" },
    { name: "Author", testId: "author-filter" },
  ]

  before(() => {
    cy.logInThroughApi("my-medium-post@gmail.com", Cypress.env("password"))
    cy.visit("/where-my-filter-is")
  })

  after(() => {
    cy.deleteReportThroughApi()
    cy.logoutThroughApi()
  })

  it("should check filters are correctly named", () => {
    filters.forEach((item) => {
      cy.getBySelector(item.testId).should("have.text", item.name)
    })
  })

  it("should check new filter 'Medium' is saved when report is saved", () => {
    cy.getBySelector("my-filter-name").type("Medium article{enter}")
    cy.getBySelector("save-report").click()
    cy.visit("/go-to-reports")
    cy.getBySelector("retrieve-report").type("Medium article{enter}")
    cy.getBySelector("my-filter-name").should("be.visible")
  })
})
```
Note: Another, better approach would be to create the report through the API, then go to the report screen, search for the report, and check the visibility of the filter chips.
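That API-first idea can be sketched with cy.request; the endpoint, payload shape, and selectors here are hypothetical and would need to match your backend:

```javascript
// Sketch of the API-seeded variant described in the note above.
// Endpoint, payload, and selectors are hypothetical examples;
// logInThroughApi and getBySelector are the custom commands used
// elsewhere in this article.

describe("Medium Screen: saved report filters", () => {
  before(() => {
    cy.logInThroughApi("my-medium-post@gmail.com", Cypress.env("password"))
    // Seed the report directly through the backend: faster and more
    // stable than driving the save flow through the UI every run.
    cy.request("POST", "/api/reports", {
      name: "Medium article",
      filters: [{ name: "Medium" }],
    })
  })

  it("shows the saved filter chip when the report is opened", () => {
    cy.visit("/go-to-reports")
    cy.getBySelector("retrieve-report").type("Medium article{enter}")
    cy.getBySelector("my-filter-name").should("be.visible")
  })
})
```

The UI save flow then only needs to be covered once, in its own dedicated test, instead of as setup for every report assertion.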

As you can see, in the first approach the QA just checks for the existence and text of the filter name, while in the second approach the QA also covers the other filters on the screen and performs a (kept simple for Medium purposes) user journey that validates the filter through its integration with the reporting functionality.

The point is that QA goes deeper than the visibility of an element. The idea is to simulate useful, critical, and impactful user journeys, where the visibility assertions confirm the expected behavior but are not, by themselves, what the QA is testing. Doing this takes time; these are tests built with intention. It requires exploratory sessions to understand the scope of the feature, which leads to a well-structured test. It also requires analyzing DOM elements to understand which ones are ideal to use as expected conditions, so the framework waits for the application at the right moments and the tests stay robust.
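Those "expected conditions" lean on Cypress's built-in retry-ability: chaining an assertion onto a query makes Cypress retry it until the condition holds or the timeout is hit. A small sketch, with hypothetical selectors and route:

```javascript
// Sketch: using Cypress retry-ability as an expected condition instead
// of fixed sleeps. Route and selectors are hypothetical examples.

it("waits for the report list before interacting", () => {
  cy.visit("/go-to-reports")

  // Brittle: a fixed wait slows every run and still races slow backends.
  // cy.wait(5000)

  // Robust: assert on DOM state; Cypress keeps retrying these queries
  // until the spinner is gone and the list is visible (or times out).
  cy.getBySelector("loading-spinner").should("not.exist")
  cy.getBySelector("report-list").should("be.visible")
})
```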

“QA needs to know it all, otherwise they are not trustworthy”: QAs need to understand the product they are testing. In terms of the product, the QA should be as proficient and knowledgeable as the PMs, POs, or stakeholders. The problem with not knowing the product well enough is that other team members won’t trust you. How can the QA Engineer ensure quality if they don’t understand the product? This is a common mistake among QAs, and it happens constantly, as it is not an easy thing to achieve. How can you become a product expert when you also need to be a QA expert, an automation expert, and stay on top of CI/CD processes? Well, this is something I am also trying to figure out.

Bugs leaking to production: When bugs leak to prod, team members can blame the QA, trust can be impacted, and QA processes, even the testing the QA actually performed, can be put into doubt. However, did it occur to anyone to think about the potential reasons why the bug leaked?

  • Were last-minute requirements a factor?
  • Was the QA’s input considered when mentioning the impact on the quality of these new changes?
  • Was the testing rushed?
  • Was the fix deployed late to the QA team?
  • Were the requirements clear enough?
  • Was the refinement clear enough?
  • Was there enough time to build a proper test plan?

I could spend hours talking about this topic; it is very interesting to discuss. Something I have noticed is that this treatment of QA engineers tends to occur when PMs don’t know much about software and its life cycle, when developers have a low orientation toward quality, or when developers are senior and QAs are junior and make very junior mistakes, which reduces the credibility of their work. The good thing is that I see this less and less, and the further you progress into companies that hire more senior positions, the rarer this treatment becomes (I hope). Developers are also getting very good at quality and testing, which leads to more appreciation of QA processes and, therefore, of QA Engineers.


Karla Mieses

QA Automation Engineer | ISTQB Certified | Software QA best practices, Cypress, and UX advocate.