The Automation Utopia is a Mirage

ivy 🧚✨
Published in Misplaced Musing · 11 min read · Dec 22, 2023

Let’s get real about the stereotypes that result in shitty software, disjointed teams, and precious builders’ time wasted.

In December 2021, I embarked on a pivotal journey as the Head of Test at a cutting-edge blockchain startup. I was tasked with shaping the vision and providing executive leadership for the automated testing strategy of their web and mobile applications, and my mandate extended to ensuring the robust health and stability of the CI/CD pipeline.

The overarching objective was ambitious — to forge an industry-leading automation engine that never falters, delivering top-tier software for developers to navigate with unprecedented velocity. Notably, the responsibility framework deliberately omitted explicit reference to Quality Assurance (QA) or any manual participation in producing quality software.

An environment where developers can push software unfettered, unbridled by fears of defects and immune to disruptions in their pace, powered exclusively by an automated pipeline of stable and high-fidelity testing, is enticing. But the stark reality for companies valuing both internal developer experience and external user experience (UX) is that such a utopian concept, while cute to imagine, can never — and perhaps more provocatively, should never — exist.

Peeling back the layers reveals the profound value of QA, often oversimplified as ‘manual testing’ of software. It dismantles the idealistic vision of achieving high-quality software solely through automation, challenging industry biases and pushing for a more holistic understanding of QA’s integral role.

I initiated my professional journey at Apple, entering as a Quality Assurance individual contributor after setting lofty and annoyingly narrow goals to secure a SWE position at a FAANG company… and then constantly choking on the algorithm interviews (as one does with massive performance anxiety).

“This is an easier entry-point to ultimately becoming a software engineer,” I rationalized, masking the perverse feelings of rejection and defeat. Venturing into the tech industry through the lens of a QA engineer left me grappling with a sense of failure. The vivid image of the enthusiastic Software Engineer I aspired to be clashed with the (in my mind, transient) role of QA I found myself in. My commitment to leveraging this role as a stepping stone toward the more profound and respected software engineering position deepened and soothed my shame.

Curiously, the stigma associated with being a QA professional came naturally to me, despite my having no prior experience in the discipline.

The genesis of this seemingly innate bias surrounding QA isn’t rocket science. Let’s lean into the devil effect, the inverse of the halo effect, to illustrate how easily this can unfold: a single negative perception within the QA discipline can cast a wide shadow, leading individuals to make overarching judgments about the entire field. Take, for instance, the common association of QA with manual processes, often deemed slower than automation. In a fast-paced world where meeting the demands of eager, enthusiastic users and staying ahead of the competition is paramount, this perceived slowness becomes the very adversary we must urgently overcome. Speed in our processes becomes not just a preference but a strategic imperative for success.

Why would I want to join Apple as the very thing compromising its success?

When decisions are influenced by these misconceptions, we inadvertently sacrifice the thoughtful balance between high-velocity progress and developer speed on one side and top-notch software quality for users on the other. It’s unjustified, yet the propensity to make sweeping generalizations is remarkably easy to indulge. What a paradox — the intentional practice of thoughtfulness, requiring time and consideration, frequently takes a backseat to the misguided prioritization of velocity, propelled by these ingrained biases.

What are these bias-reinforcing QA misconceptions that end up sabotaging our internal and external experiences, slowing everyone down while still producing buggy code even with ✨automation✨?

From my experience as an entry-level QA at Apple, a founding QA engineering leader at Robinhood, and the Head of Test at OpenSea, here’s my list of the major QA misconceptions compromising the velocity of the entire Software Development Lifecycle (SDLC) and leading to buggier code with lower-quality experiences for users:

💥 QA equates exclusively to manual testing

I’m not going to talk about how these QA experts, oftentimes diluted to simple manual testers of software, can also wear SDET hats with impactful automation potential, taking responsibility away from the builders and enabling their focus on high-speed output without personal thoughtfulness and accountability for their code quality. (It’s important to note I’m genuinely and fervently against the manual-QA-to-SDET-for-career-“advancement” pipeline; I’ll get into my opinions on that another time.)

To reflect more accurately on the bias that severely minimizes QA into manual testers — typically simplified even more dramatically into ensuring the company’s already-shipped features, living in a more stable state of maintenance with subtle updates, enhancements, or bug fixes, remain stable through predictable regression testing (this obviously devalues QA; repetitive testing like this demands automation, otherwise it’s a massive waste of human ingenuity) — it’s valuable to draw an analogy to Product Management, a discipline that commands inherent and ubiquitous respect in the industry.

ChatGPT describes product management as:

…overseeing the entire lifecycle of a product, from concept to delivery. It involves understanding customer needs, defining a clear vision for the product, and collaborating with cross-functional teams like engineering and marketing to ensure its successful development and launch. Product managers act as the bridge between business strategy, user experience, and technical implementation to create products that meet both customer and business objectives.

…which can also serve as an accurate description of the complexities of QA, who similarly obsess not over the product’s fit in the market and with the target user demographic, as product managers do, but over the quality of that software and the sentiment of the software’s users. For QA to effectively uphold these responsibilities, the role necessitates a deep understanding of the product’s diverse spectrum of customers and a nimble operational strength that fuses a shared championship of quality, again for ease of the user experience, across multiple stakeholders. QA manages this obsession with quality by bonding (1) Design and Product Management teams’ prototyping of the idea, all the way through implementation of their vision, to deliver the intended experiences to users; (2) Engineering teams’ requirements from Design and Product to build those features and ensure, with the highest certainty, code quality throughout the SDLC; and (3) Customer Experience and Marketing teams’ stability when handling the rollout and the potential friction users may experience onboarding to the product, all while providing these cross-functional teams the mentorship and tooling to properly steward quality themselves at each stage of the entire feature lifecycle, through and continuously after shipping to their users.

P̶r̶o̶d̶u̶c̶t̶ Quality managers act as the bridge between business strategy, user experience, and technical implementation to create products that meet both customer and business objectives.

The QA engineer has a stocked closet full of varying hats and is quick to swap out the worn fashion for whatever the feature-state requires, with a strong interdisciplinary intuition for when that outfit change is critical. They test, yes, but their operational strategy, which keeps a strong pulse on the continuous quality of the code and the overall product experience for users’ happiness (and ultimately the business bottom line), entails so much more skill than the biased “manual testing” label denotes.

It’s also important to note there are some archaic organizations that focus solely on offshore QA, where teams merely verify existing feature quality by repetitively going through the same testing processes to ensure nothing has regressed. When I mention QA, I’m not talking about this particular approach.

💥 The simplicities of manual testing can be fully replaced by automated testing

Though I’ve already argued that QA brings more depth and value to an organization than manual testing alone, there still exists an element of testing that requires an ingenious human professional: manually ideating complex coverage that properly represents a product’s user and device configurations, executing that coverage to ensure the ideal user experience at launch, and driving the optimal strategy to maintain ongoing stability of that user experience after launch. This work is manual in nature, but the industry’s obsession with automating everything ignores an immensely valuable market strength: obsessing over your user.

When prioritizing the quality of a completely new feature, where QA thrive and add the most value, QA must embrace ideating and prioritizing coverage of the diverse and complicated matrix of users’ platforms, devices, countries, identities, proficiencies, and interests. This should be done while upholding a meticulous and watchful attention to safety and security for the company’s diverse user base — uniquely catering to and addressing the specific challenges posed by the complexity and scale of each new feature.

QA valuably ideate this massive test coverage to comprehensively represent the company’s users, and they diligently dig to expose hidden, convoluted edge cases that aren’t obvious without detective work. This responsibility, which requires human QA to exert manual effort, is what produces the ultimate coverage strategy for delivering a seamless UX. That strategy eventually prioritizes an efficient and thoughtful balance of automation and manual coverage, contingent on the complexity of the experience, without compromising either the launch date or the quality at launch. Without that human element, the company willingly embraces an ignorance of its users and their complex matrix of usage, sparking friction in, at best, some isolated experiences and, at worst, entire classes of user types — all damaging users’ sentiment toward the product and company.
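
To make this concrete, here’s a loose sketch, in pytest, of how one slice of that human-ideated matrix might eventually land on the automated side of the balance. The platforms, locales, networks, and the check_onboarding_flow helper are hypothetical stand-ins, not any real company’s coverage:

```python
# A hypothetical slice of an ideated coverage matrix, expressed as
# parametrized automated checks. The matrix values and the
# check_onboarding_flow helper below are illustrative stand-ins.
import pytest

PLATFORMS = ["web", "ios", "android"]
LOCALES = ["en-US", "ja-JP", "ar-EG"]   # deliberately includes an RTL locale
NETWORKS = ["wifi", "3g"]               # degraded networks surface edge cases


def check_onboarding_flow(platform: str, locale: str, network: str) -> bool:
    """Placeholder for the real end-to-end check the owning team would write."""
    return True


@pytest.mark.parametrize("platform", PLATFORMS)
@pytest.mark.parametrize("locale", LOCALES)
@pytest.mark.parametrize("network", NETWORKS)
def test_onboarding_matrix(platform, locale, network):
    # Each combination is one cell of the human-ideated matrix; automation
    # handles the repetition, humans decide which cells belong here at all.
    assert check_onboarding_flow(platform, locale, network)
```

The automation handles the repetition; deciding which cells belong in the matrix in the first place, and which deserve a human walkthrough instead, is the part that stays stubbornly manual.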

Even in this unrealistic automation utopia, the devil’s-advocate scenario of a hypothetical organization with unlimited resourcing to achieve an idealistic state of expansive automation coverage running with stability and speed both locally and in CI/CD, someone must still own the overall testing strategy and sufficiently advocate for end users’ quality experiences while weighing developer velocity. The strength of QA engineering, knowing the product and business requirements, bridging the working teams, championing a culture of quality, and advocating for users they know almost personally, remains a value-add.

It’s moot to argue in any case: this utopia doesn’t and shouldn’t exist.

💥 QA generally compromises velocity

The pitfalls I’ve observed in companies over-indexing on automation as the solution to both overall feature-shipping speed and sustained product quality highlight an incredibly complicated question of ownership: who writes the automated tests, and who maintains them. Without intelligent strategies for both, organizations are left with flaky, fragile automation and slow local or remote developer experiences that disrupt the entire company’s velocity. Assigning the automation implementation to any separate internal team or external offshore team, rather than the owning team who coded the original product requirement, leaves a difficult-to-prevent knowledge gap from hand-off and onboarding disruption. Assigning the maintenance of those tests once implemented, with the goal of keeping the automated coverage high-signal and fast, to anyone other than the original owning team ignites severely burdensome distractions that almost always require interfacing with the owning team for guidance, costing substantial speed.

Let’s be clear: repetitive regression testing should always be automated across unit, integration, and high-fidelity end-to-end testing and integrated into CI/CD to catch and block any obvious regressions from deploying to users. However, prioritizing even minimal automated regression coverage of a product’s most critical user experiences, across the entire product surface of the supported platforms, requires deliberate upfront engineering requirement planning and substantial development dedication during the SDLC to bring those requirements to fruition before launch.
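
For a sense of what that baseline looks like in practice, here’s a minimal sketch, assuming pytest; the marker name and the tiny fee_for() helper are illustrative, not any team’s real code. The idea is that the critical-path checks carry a regression tag so the pipeline can run something like pytest -m regression on every pull request and block the merge on failure:

```python
# Illustrative regression checks tagged so CI can run `pytest -m regression`
# on every pull request and block merges on failure. fee_for() is a
# hypothetical stand-in for already-shipped product logic under coverage.
from decimal import Decimal

import pytest


def fee_for(sale_price: Decimal, fee_rate: Decimal = Decimal("0.025")) -> Decimal:
    """Hypothetical fee calculation representing shipped product behavior."""
    return (sale_price * fee_rate).quantize(Decimal("0.01"))


@pytest.mark.regression  # the marker would be registered in pytest.ini
def test_fee_is_rounded_to_cents():
    # Cheap, deterministic unit-level check that runs on every commit.
    assert fee_for(Decimal("19.99")) == Decimal("0.50")


@pytest.mark.regression
def test_zero_price_charges_no_fee():
    # Guards an already-shipped behavior from silently regressing.
    assert fee_for(Decimal("0")) == Decimal("0.00")
```

The value isn’t in the specific assertions; it’s that this repetitive layer runs without a human, leaving the humans free for the coverage ideation automation can’t do.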

Even if the cost of requirement planning is acceptable, the resourcing alone for a team of developers to implement those automated test cases, verify their signal, and integrate them into the product’s CI/CD (while ensuring they run easily locally, not just remotely in CI) requires developers with both product experience and infrastructure experience. I of course argue that for sustainable product quality and stability when moving quickly as an Engineering organization, prioritizing this resourcing for regression coverage is essential — but the notion that such an allocation could support a broad, complicated matrix of automated testing that renders manual testing obsolete, without any velocity impact, is not only incredibly unrealistic but also extremely risky and unpredictable.

There’s costly time and resourcing expenditure in over-engineering an excessive technical solution merely for the sake of having a technical solution, as well as massive user experience degradation in over-indexing on automated test coverage alone without integrating thoughtful advocacy for users’ wants and needs when interacting with the product.

💥 QA technical competency is inadequate

Without applied Software Engineering skills and an ease in navigating complex algorithm challenges, QA often become categorized as non-technical individuals and are subsequently excluded as genuine engineers. This breeds a toxic culture that separates Engineering and QA, leaving silos between the organizations, poisoning the opportunity to integrate closely during the development process, and cannibalizing the speed of developer output while maintaining a productive concern for quality. Without QA directly integrated into the SDLC alongside developers, organizations naturally fail to uphold the values of an Agile development methodology and mistakenly fall into a waterfall structure, handing finished pieces off to teams downstream. Waterfall development is a linear practice with less flexibility and adaptability, slowing teams down and costing overall speed (refer again to the busted velocity myth above).

Immersion within the Engineering organization fosters a deep technical competency that allows QA to excel in understanding intricate system architectures. This unlocks powerful, close integration within development teams, who can respect QA as true engineers speaking similar languages and working alongside them to brainstorm comprehensive testing across the stack. QA utilize a profound understanding of the technical aspects of the product to ensure optimal functionality of the in-development feature.

Let’s use clearer examples of the technical strength QA can apply during software development to save time while pushing for the highest code quality. Before the user interface (UI) is implemented for full end-to-end testing, QA can apply planned coverage to the database and the API itself in isolation, recommending which pieces should have automation coverage and making use of time during development rather than waiting for the entire product to be complete before beginning their quality assessments. Or, during defect reporting while QA tests new aspects of the in-development feature, QA’s technical strength can pinpoint the exact problem within a developer’s code itself, saving the developer time during the investigation and resolution of the defect.
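
As a loose illustration of that API-in-isolation idea, here’s a sketch of contract-level checks QA might run against a staging service before any screen renders the feature. The base URL, endpoint, and payload shapes are hypothetical stand-ins, not any product’s real API:

```python
# Hypothetical API-level checks exercised before any UI exists. The staging
# host, endpoint, and payloads are illustrative; the point is that quality
# assessment starts at the API layer, in isolation, during development.
import requests

BASE_URL = "https://staging.example.com/api/v1"  # assumed staging host


def test_create_collection_returns_created_resource():
    payload = {"name": "test-collection", "chain": "ethereum"}
    response = requests.post(f"{BASE_URL}/collections", json=payload, timeout=10)

    assert response.status_code == 201
    body = response.json()
    # Contract expectations the eventual UI will depend on.
    assert body["name"] == payload["name"]
    assert "id" in body


def test_missing_name_is_rejected_with_details():
    response = requests.post(f"{BASE_URL}/collections", json={}, timeout=10)

    # The API, not the UI, owns returning a clear validation error.
    assert response.status_code == 422
    assert "name" in response.json().get("errors", {})
```

Checks like these also double as the pinpointing tool mentioned above: when a defect later surfaces in the UI, rerunning the API-level case tells the developer immediately whether the bug lives in the backend or the frontend.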

This fiery technical proficiency gets smothered easily if QA becomes structurally isolated from Engineering or, even more subtly, goes culturally unacknowledged as an aspect of Engineering. The stereotype that QA just aren’t technical unfortunately fuels the smothering that organically devalues the discipline as a whole.

Unintentionally devaluing QA by over-indexing on automation as a faster, more effective solution than humans bringing their expert and creative advocacy, for both the interdisciplinary internal teams doing the work and the external users the company serves, only breeds unnecessarily complicated and inefficient internal operations that deliver degraded experiences to the very customers whose sentiment directly correlates with a company’s stability and success. QA remain pivotal product-quality and user experts, stewards of communication, and champions of the UX across all disciplines.

Unraveling the stereotypes highlights why, despite my own initial biases when starting my career as a Quality Assurance engineer, I swiftly recognized that Apple’s product success was intricately tied to an unwavering commitment to quality and the pursuit of launching products as close to perfection as possible. My sentiment about the role shifted: from a mere stepping stone toward a more significant software engineering position, it transformed into an invaluable, influential force dedicated to ensuring user happiness and striving for product excellence. Now six years into my career, advocating for quality as a cultural priority, bashing these toxic stereotypes, and maintaining a realistic balance of speed during feature development has remained an integral piece of my continued leadership.

This isn’t sustainable, though.

The damage from stereotyping is generally quite obvious: stereotypes wield a profound impact on individuals and are often rooted in biases and misinformation, limiting opportunities and equity in team dynamics. They foster discrimination, exclusion, and unfair judgments, which also disrupt the thoughtfulness and consideration so valuable to prototyping, building, and shipping high-quality experiences to the very users supporting your company’s success.

The very pivotal QA engineers driving this unmatched concern for optimizing the UX undergo personal damage to their sense of impact and value in an organization — we risk ultimate burnout and attrition of what is a deeply valuable addition to any Engineering organization. Burnout, a pervasive and tough-to-remediate poison, needs its own attention, and I’ll dig into the destructive nature of this incredibly consequential reality that affects so many high-impact, particularly support-facing, engineers.

Let’s stop prioritizing the allure of an all-encompassing automation utopia over artful advocates of optimal internal and external experiences, connecting your users more closely to your product.

