This is the second and final installment in a two-part series covering application security testing. Be sure to read Part 1 to learn how to be agile with your Agile adoption.

To get the most from our automation and DevOps tooling, we need to utilize a series of secure quality gates that test software for acceptable levels of quality and security risk, depending on what stage we’re at in the feature’s development life cycle.

Four Gates of Application Security Testing

At each gate, if automation discovers and reports that the software doesn’t pass predefined quality and security criteria, we need to immediately inspect and adapt our development to triage and address any issues. Only then can we move past the gate and continue with feature implementation.

1. The Development Gate

The first gate stands at the developer level. We require every new piece of functionality — whether part of a user story, feature request or defect fix — to be accompanied by a unit test that serves as a development-level gate. These tests run automatically with every Jenkins build of the software, so application security testing takes place many times throughout the day. Developers also frequently run ad hoc tests on their desktops as they write new code.

The goal of this gate is to make sure the code is functional. It’s not about testing user story functionality or acceptance — the first gate is simply focused on ensuring the code logic functions correctly. The companion security component in the first gate involves running incremental static application security testing (SAST) analysis of new source code.
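As a minimal sketch of what a development-gate test looks like, consider a small helper with its accompanying unit test (the function and test names here are illustrative, not from any particular product):

```python
import unittest


def sanitize_username(raw: str) -> str:
    """Hypothetical helper under development: strip whitespace and
    reject empty input before it reaches lower layers."""
    cleaned = raw.strip()
    if not cleaned:
        raise ValueError("username must not be empty")
    return cleaned


class SanitizeUsernameTest(unittest.TestCase):
    """Development-gate test: verifies code logic, not story acceptance."""

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(sanitize_username("  alice  "), "alice")

    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            sanitize_username("   ")
```

A CI job can run such tests with `python -m unittest` on every build, so a logic regression fails the gate within minutes of the commit.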

When conducting the SAST analysis, we look for certain types of issues and vulnerabilities. For example, if the feature involves data flowing from user inputs to a database, we’ll flag any high-severity input validation vulnerabilities, such as cross-site scripting (XSS) or SQL injection. These issues can often be identified early in development before the entire feature is completed. Any such defects found during the automated analysis must be fixed before the developer can proceed with the next steps.
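For instance, a SQL injection finding at this gate is typically resolved by replacing a string-built query with a parameterized one. A minimal sketch using Python's built-in sqlite3 (the schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")


def find_user(user_input: str):
    # Vulnerable pattern a SAST scan would flag:
    #   conn.execute("SELECT email FROM users WHERE name = '%s'" % user_input)
    # Safe pattern: the driver binds the value, so input like
    # "alice' OR '1'='1" is treated as data, not SQL.
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()


print(find_user("alice"))             # [('alice@example.com',)]
print(find_user("alice' OR '1'='1"))  # [] -- injection attempt finds nothing
```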

2. The Quality Assurance Gate

The second gate stands at the quality assurance (QA) and nightly build level. Each night, we produce a QA-ready build that testers will use the following day to review and test any partial features that are ready, using a fully installed build. For this gate, in addition to automated unit testing, we conduct more thorough system/functional tests, which are also completely automated.

Our overall goal is to deliver something truly usable to QA every day. A feature may not be complete, but testers can install and run a selected set of tests based on where we are in user story implementation. We also run a SAST scan of the product’s source code in conjunction with dynamic application security testing (DAST) analysis of our test web servers.

At this gate, we’ve configured our Jenkins build to fail if a certain threshold of defect/vulnerability levels and types is hit. Typically, we’re looking for incorrect feature behaviors. In security scans, we’re looking for true positives that are marked with a high confidence of possible exploitability by attackers.
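The threshold check itself can be a small script that parses the scanners' findings and reports pass or fail. A sketch, assuming findings have already been collected into dictionaries with severity and confidence fields (the field names and rule IDs are illustrative):

```python
# Illustrative findings, as a scan-report parser might produce them.
FINDINGS = [
    {"rule": "xss-reflected", "severity": "high", "confidence": "high"},
    {"rule": "weak-hash", "severity": "medium", "confidence": "low"},
]


def gate_passes(findings, max_blockers=0):
    """Fail the gate if high-severity, high-confidence findings
    exceed the allowed threshold (zero by default)."""
    blockers = [
        f for f in findings
        if f["severity"] == "high" and f["confidence"] == "high"
    ]
    for f in blockers:
        print(f"BLOCKER: {f['rule']}")
    return len(blockers) <= max_blockers
```

Wired into CI, the wrapper script would call `sys.exit(1)` when `gate_passes` returns false, which is how Jenkins marks the build as failed.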

When a build fails based on these criteria, the right people on the scrum team are immediately notified via automated messaging. They can then begin working to triage defects and resolve the issues so that a new build can be started. We begin to accumulate a trend graph at this gate so that the team has clear visibility into ongoing security or quality issues. This enables the team to see how the quality of its code and security is hopefully improving day-to-day, sprint-to-sprint and year-to-year.
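The trend data behind such a graph can be as simple as one record appended per gated build. A sketch (the file name and column set are illustrative):

```python
import csv
import datetime
import pathlib


def record_trend(path, defects_open, vulns_open):
    """Append one row per gated build so the team can chart
    quality and security trends over time."""
    path = pathlib.Path(path)
    new_file = not path.exists()
    with path.open("a", newline="") as fh:
        writer = csv.writer(fh)
        if new_file:
            writer.writerow(["date", "defects_open", "vulns_open"])
        writer.writerow([datetime.date.today().isoformat(),
                         defects_open, vulns_open])
```

Any dashboard or spreadsheet can then plot the counts day-to-day and sprint-to-sprint.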

3. Production-Ready Build

The third gate stands at our weekly production-ready build. We typically work in three-week release iterations, but the scrum team has a goal to validate that the source base is ready for release at least once a week throughout the sprint.

At this gate, we require that no in-progress code be functional in the source — using build-time and run-time switches to hide in-progress work — and execute an extensive, entirely automated run of unit, system and security tests on the entire product.
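Those run-time switches are essentially feature flags. A minimal sketch (the flag mechanism here is an environment variable for illustration; real systems typically read flags from configuration or a flag service):

```python
import os


def feature_enabled(name: str) -> bool:
    """Run-time switch: in-progress features ship dark unless
    explicitly enabled (e.g., in dev or test environments)."""
    return os.environ.get(f"FEATURE_{name.upper()}") == "on"


def render_dashboard() -> str:
    widgets = ["summary", "alerts"]
    if feature_enabled("new_reports"):  # in-progress work stays hidden
        widgets.append("reports-v2")
    return ", ".join(widgets)
```

With the flag off, the production-ready build behaves exactly as if the in-progress code were absent, so it can be tested and released on its own merits.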

Here, we’re looking for problems or vulnerabilities that need to be triaged to conduct accurate risk assessments and to determine whether the product should be delivered as is. We look at all security vulnerabilities, including those lingering in our backlog that may have been deferred, and perform assessments of efforts to resolve and/or close out any remaining defects and minimize rolling technical debt.

4. Ready-to-Ship Build

The final gate comes at the end of the iteration with our ready-to-ship build. At this point, all hands are finally on deck — everyone is a tester and everyone spends time simply using the product. We run through acceptance tests from specific scenarios and user points of view to verify that user stories are complete.

We even assign a couple of “dumb users” (usually managers) to use the product as though it were the very first time. To keep it real, these users pretend to have no interest in reading any product documentation. This permits us to determine whether we’ve missed the forest for the trees after having been so deeply immersed in implementation throughout our sprint. We get the features in front of stakeholders and product owners to walk through our implementation to make sure that no incorrect assumptions have been made and all expectations and acceptance criteria have been met.

The Value of Sprint Automation

We’re able to accomplish this because we’ve invested in automating the process, and we rely on automation regularly and iteratively throughout the sprint. There’s nothing unique about our approach to the final gate. Most scrum teams ramp up testing and call “pens down” on their developers toward the end of their sprints.

That’s all great, but such an approach frequently results in a mad scramble that unearths last-minute surprises and leads to less secure and lower-quality code being formally released. It can even prompt decisions to defer feature implementations from one release to the next.

You can make that final production push far more efficient by utilizing your DevOps system with an iteratively progressive testing and gating approach. This can also help you to achieve a high level of quality and security with every release of your software.
