Security is another dimension of quality

Christopher Blanco
Nov 2, 2021

In the cybersecurity industry, the term “DevSecOps” is touted as the best way to integrate security best practices into your company’s software development pipeline. The term tends to get interpreted literally, however: discussions around its implementation focus on bringing in new people and tools to identify specific vulnerabilities in code. This makes the effort ineffective, because security issues are frequently caused by bad design decisions made before any code was written. When you step back from thinking about the “Sec” element as purely about catching code vulnerabilities, you have more flexibility in how to tackle the problem. In this post, I’ll share why I think it’s valuable to treat security as an extension of existing quality assurance efforts, because fundamentally QA protects against similar risks through similar means.

Let me start with what led me to write about this in the first place.

I was a founding member of my last company’s security efforts. The company was a B2C company whose primary products were streaming devices and mobile and web applications, linked to a common backend for engaging with content. When drafting a report arguing for more security operations personnel, I came to realize that the vast majority of incidents impacting our online services were due not to attackers exploiting vulnerabilities, or even abusing us with DDoS attacks, but to organic traffic growth and our own dumb selves doing inadequate testing.

How did I classify these incidents? I created 4 categories and applied them to all incident reports we’d recorded (which we’d been in the practice of doing consistently for years by this time):

  • Abuse (someone’s misused the service to the detriment of us and/or others; DoS attacks fit here)
  • Fraud (someone’s claimed ownership of an asset to gain illicit access to a privileged good; most often credit card fraud)
  • Piracy (someone’s gained access to content to which they are not entitled)
  • Hacking (someone’s exploited a vulnerability in our software for any number of outcomes; breaches fit here)
[Figure: rate of occurrence vs. financial impact of the incidents I was observing. Abuse: a tall, wide curve at the low end of the financial-impact axis and the highest rate of occurrence. Fraud: generally a little more expensive but less frequent, and never very expensive. Piracy: starts more expensive, still less frequent. Hacking: a wide range of cost, potentially very expensive, but almost no occurrences.]
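To make the classification concrete, here’s a minimal sketch of the kind of tally behind that chart. The four category names are from this post; the incident records and dollar figures are invented for illustration.

```python
# Tally incident counts and total financial impact per category.
# The categories are real (from this post); the data below is made up.
from collections import Counter

CATEGORIES = ("Abuse", "Fraud", "Piracy", "Hacking")

# (category, estimated financial impact in USD) -- fabricated examples
incidents = [
    ("Abuse", 500), ("Abuse", 800), ("Abuse", 300), ("Abuse", 1_200),
    ("Fraud", 2_000), ("Fraud", 3_500),
    ("Piracy", 4_000),
    ("Hacking", 50),
]

counts = Counter(category for category, _ in incidents)
total_cost = {c: sum(cost for cat, cost in incidents if cat == c) for c in CATEGORIES}

for c in CATEGORIES:
    print(f"{c}: {counts[c]} incidents, ${total_cost[c]:,} total impact")
```

Even a toy tally like this makes the ROI argument visible: the noisy categories dominate the incident count while the scary one barely registers.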

Notice how only one of the four categories above actually deals with exploitation of software vulnerabilities introduced by developers? The rest can certainly impact the CIA (Confidentiality, Integrity, and Availability) of an asset, but were more often caused by a design mistake than by some injection vector. By raw incident count, the majority of our issues were in the Abuse bucket. By cost of incident, well, none of the above had resulted in a breach, and while Hacking-type issues existed, they were primarily scoped to attackers reverse-engineering our client apps to prevent the loading of ads (shown to users without a subscription); the distribution of these modified application packages was hardly measurable across the total userbase, so the impact was negligible. I quickly began to realize that while I personally felt a desperate need to bring in more security expertise to ward off dangerous attacks, that sort of investment just didn’t have good ROI.

[Figure: a basic software development cycle, with stages colored by the function primarily responsible for executing each.]
[Figure: the same development cycle, but featuring QA partnering with developers in the Design stage.]

Our QA teams, however, were already testing for three of the four issue types above and getting involved upstream in the design phase to prevent further issues. None of them were trained in cybersecurity. None of them were using scanners, fuzzers, or sophisticated penetration-testing techniques to identify vulnerabilities (until somewhat recently). However, they knew how to recognize dangerous impacts to sensitive assets, and they were getting involved earlier in the development cycle (“shifting left”) and applying that mindset to catch these problems before developers began writing code. In case you’re still wondering why I think this is relevant, consider some of the scenarios they were seeking to prevent:

  • Will the app fail to scale to meet demand that spikes at the launch of a new product or feature?
  • Can a forum troll find a way to bypass automated moderation and spam threads with offensive imagery?
  • Can someone use a public API to stitch together a complete picture of all the personal data you hold on your users?

All of the above, in my opinion, blur the line between a quality assurance issue and a cybersecurity issue. Those of you reading this from the security space will recognize the value your security engineers and analysts could bring in identifying these issues, but talk of DevSecOps remains fixated on those Hacking incidents: high-cost and high-impact for sure, but for most companies probably the least likely to occur and cause financial loss. I think it’s helpful to embrace how closely QA and security fit together, as their concerns are so often the same.

But enough with anecdotes. Let’s talk about details that illustrate why I think this angle is useful.

Adding security tests as a part of QA is less scary to pitch. What does adding a new function to your pipeline sound like to most senior managers and executives? Additional operating expense. It’s also compounded by risk, since they’ll need to bring in experienced managers to keep this fresh set of operations grounded and productive. Furthermore, there’s already a lot of fear and uncertainty around cybersecurity, if the news is any indication. Extending what you already have is a lot less scary than embracing and incorporating the unknown.

QA workflows are already quite mature. Did you notice how, in the history of DevOps (see the Notes below), there wasn’t a preceding stage of “DevQA”? Quality Assurance has been around for a very long time and has developed a multitude of ways to tie itself into development workflows and measure its success. Your QA teams probably already have a means of classifying the severity of bugs they find, and your security vulnerabilities may as well co-opt or extend those classifications. Moreover, mature QA teams are seeking to “shift left” just like security teams are, for the same reason: efficiency.

Workforce constraints exist. (ISC)² publishes annual reports on the state of the cybersecurity workforce, and their latest findings indicate a gap of 2.72 million professionals needed vs. available to cover all outstanding needs. So while “DevSecOps” intends to bring the tools and skills of security professionals to bear in the development pipeline, breaking down barriers and improving overall efficiency, attempts to implement it by scaling up your security org will deadlock you from the start, since you won’t be able to hire. Moreover, if security conference topics are a good indicator, many companies are struggling to adapt the mindsets and skills of their existing security professionals to work in a software development team using an Agile workflow. You’ll have an easier time showing the people you already have some cool new things to consider (which I’ll go over in more detail in an upcoming post).

Overall, I think it’s risky to treat “DevSecOps” as simply adding security professionals and their tools to your software development pipeline. It’s a costly endeavor with operational hurdles beyond hiring and paying for software licenses, and it may only address a fraction of the issues causing your company financial loss. Instead, I think it’s freeing to focus on how quality assurance efforts can be extended to cover security vulnerabilities, and to join QA in thinking about what could go wrong at the design phase, before there’s even code to run a scanner on. At the very least, you’ll be inspiring others to consider security concerns as they contribute to your company’s software, and I believe a little extra awareness is always a good thing.

Takeaways

If you’re a manager or influencer in your company working with others to ship software, here’s what you can do next.

  • Look at who you already have in QA and see whether any of them are already looking into security-related topics. If so, hang out and start dropping links. For manual testing, check out the OWASP Top 10. For SDET types already deeply embedded in development teams, consider sharing https://owasp.org/www-project-webgoat/.
  • Join QA in “shifting left”. Risk and privacy assessments required by data privacy laws will force you to do it anyway. The most frequently occurring incident types are design-related (like broken access control), and asking the right questions early is cheaper than using expensive tools to discover the vulnerabilities once they’ve already been implemented.
  • Static analysis is still useful, and doesn’t need to be expensive! Your developers and QA engineers have probably tried out any number of CI tools to detect and report on findings at different stages of development. Adding use of gosec or GitHub ‘s Dependabot into your development and deployment workflows can be effectively free and still can cover some of the most common vulnerabilities in software development. You don’t need security engineers to respond to the findings these things present.
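As one concrete (and hypothetical) example of wiring such findings into a pipeline without dedicated security staff: a small script that fails the build when a gosec report contains high-severity issues. The JSON shape assumed here (an `Issues` array whose entries carry a `severity` field) reflects gosec’s JSON output as I understand it — verify it against your gosec version — and the sample report below is fabricated.

```python
# Hypothetical CI gate over a gosec report. Generate the real report with:
#   gosec -fmt=json -out=gosec.json ./...
import json

def high_severity_count(report_path):
    """Count HIGH-severity findings in a gosec JSON report."""
    with open(report_path) as f:
        report = json.load(f)
    return sum(1 for issue in report.get("Issues", [])
               if issue.get("severity") == "HIGH")

def gate(report_path):
    """Return a CI exit code: 1 if any HIGH findings, else 0."""
    n = high_severity_count(report_path)
    if n:
        print(f"gosec found {n} high-severity issue(s); failing the build")
        return 1
    print("no high-severity gosec findings")
    return 0

# Demo with a fabricated report file (a real one comes from gosec itself).
sample = {"Issues": [{"severity": "HIGH", "rule_id": "G401"},
                     {"severity": "MEDIUM", "rule_id": "G104"}]}
with open("gosec.json", "w") as f:
    json.dump(sample, f)
exit_code = gate("gosec.json")  # prints the failure message; exit_code == 1
```

A developer or QA engineer can triage whatever this flags; no security engineer is required to keep the gate running.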
  • Don’t treat security incidents in your apps differently from other incidents (unless maybe it was an internal actor with malicious intent, in which case extra confidentiality makes sense). Remember, part of the goal here is to stop treating security differently. The moment you split security incidents out, you invite all the cost and staffing issues above into your management team’s considerations; avoid that until you know there’s a gap your people and processes can’t fill. In any case, these incidents all have a cost to the company, and it’ll be easier to explain those costs using existing terms.

Up Next

As mentioned above, I next want to explore in more detail how to most effectively identify and prevent security issues in your development lifecycle. In this sense, security becomes a creative discipline, which I don’t think is talked about often enough. I hope you’ll find it interesting.

Notes

  • I think understanding the history of DevOps is important in providing context for DevSecOps, but perhaps not necessary to understand the points above. DevOps, the immediate predecessor in the evolution of software development lifecycles, had very good reasons for defining itself as it does now. Ops teams would traditionally monitor, support, and generally operate the software developed by other engineering teams, often without much help from these creators. They were often left outside the development cycle, and in some cases were literally part of a separate organization. This often created animosity and prevented developers from getting input early on how to prevent issues in their applications once they were deployed at scale. So, the solution discussed by the founders of DevOps was essentially to combine forces and build better products together. Now, in my experience, this resulted in organizations stretching themselves to put at least one Operations-type person into every software development team they had. This quickly became an expensive problem, as internally and externally there are just not as many folks skilled in the right areas as there are software development teams. Site Reliability Engineers (SREs), a variant of the “Ops” function, are facing the same problem, making the definers of the role rethink how they should realistically be organized to support broader efforts.
  • OWASP already has v2 of SAMM (the Software Assurance Maturity Model), which is “the prime maturity model for software assurance that provides an effective and measurable way for all types of organizations to analyze and improve their software security posture.” I discovered this when writing this post. It’s remarkably similar in its goals! Where I didn’t have the time to outline an entire framework for implementation, measurement and progression, they did, and have been doing so for a while now. Check it out.
  • Am I trying to suggest that there’s no value in having a dedicated security team? Of course not. The focus of this article is on security as an aspect of your software development efforts. Security professionals know that one of the most common threats to any company still comes in the form of phishing and, lately, insider threats (both intentional and not). The skills, technologies, people and assets involved when defending against these threats however are very different from those in software development, and so in my mind should be handled differently.
  • I don’t go into it here in detail, but the artifacts used in effective vulnerability management typically illustrate what asset is impacted, what attribute(s) are affected, and some sense of how severe the impact could be should the risk come to fruition (along with, hopefully, some sense of how likely this is to occur). An implicit point I’m making here is that QA processes already do much the same with their bug classifications, in order to help software development teams prioritize fixes over other efforts. Inserting your findings through existing QA processes should cause less friction than how I’ve seen separate security teams handle vulnerability management so far. Moreover, if there is already a horde of data from QA about the bugs an existing team is getting hit with, perhaps that’s a good signal for where you have greater risk of vulnerabilities being introduced, too!
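To sketch what inserting findings through existing QA processes might look like in practice: map a vulnerability’s severity score onto the bug-severity ladder QA already uses, then file it as an ordinary bug with a tag for traceability. Everything here is an assumption for illustration — the ladder names, the `Bug` shape, and the score cutoffs (which follow the CVSS v3 qualitative rating bands) — so substitute whatever your QA tooling actually uses.

```python
# Illustrative only: funnel security findings into an assumed QA bug taxonomy
# so one backlog drives prioritization for the whole team.
from dataclasses import dataclass

def qa_severity(cvss_score):
    """Map a CVSS-style score (0-10) onto an assumed QA severity ladder."""
    if cvss_score >= 9.0:
        return "blocker"    # CVSS "Critical" band
    if cvss_score >= 7.0:
        return "critical"   # CVSS "High" band
    if cvss_score >= 4.0:
        return "major"      # CVSS "Medium" band
    return "minor"          # CVSS "Low" band

@dataclass
class Bug:
    title: str
    severity: str
    labels: tuple

def file_vulnerability(title, cvss_score):
    """File a security finding as an ordinary bug, tagged for traceability."""
    return Bug(title=title, severity=qa_severity(cvss_score), labels=("security",))

bug = file_vulnerability("IDOR on a user-profile endpoint", 8.1)
print(bug.severity)  # critical
```

The point isn’t the mapping itself but that the finding lands in the same queue, with the same vocabulary, as every other bug.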


Christopher Blanco

Frequently concerned software engineer with an interest in cybersecurity and a deep interest in people, art, organization, and keeping things simple.