November 14, 2016 By David Stewart 4 min read

This is the second and final installment in a two-part series covering application security testing. Be sure to read Part 1 to learn how to be agile with your Agile adoption.

To get the most from our automation and DevOps tooling, we need a series of secure quality gates that test software for acceptable levels of quality and security risk, appropriate to the stage we've reached in the feature's development life cycle.

Four Gates of Application Security Testing

At each gate, if automation discovers and reports that the software doesn’t pass predefined quality and security criteria, we need to immediately inspect and adapt our development to triage and address any issues. Only then can we move past the gate and continue with feature implementation.

1. The Development Gate

The first gate stands at the developer level. We require every new piece of functionality — whether part of a user story, feature request or defect fix — to be accompanied by a unit test to serve as a development-level gate. These tests run automatically with every Jenkins build of the software, so application security testing takes place many times throughout the day. Developers also frequently run ad hoc tests on their desktops as they write new code.
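A development-gate unit test can be as small as a few assertions checking the logic of the new code. As a sketch, here is what such a test might look like; the function under test (`apply_discount`) and its behavior are invented for illustration, not taken from the article:

```python
# test_discount.py -- a minimal, hypothetical development-gate unit test.
# A CI job (e.g., a Jenkins build step) would run these with every commit.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping the rate to the 0-100 range."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 20.0) == 80.0

def test_apply_discount_clamps_invalid_rates():
    # Out-of-range rates must not produce negative or inflated prices.
    assert apply_discount(50.0, 150.0) == 0.0
    assert apply_discount(50.0, -10.0) == 50.0
```

Because the tests are cheap and deterministic, they can gate every build without slowing the team down.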

The goal of this gate is to make sure the code is functional. It's not about testing user story functionality or acceptance — the first gate is simply focused on ensuring the code logic functions correctly. The companion security component in the first gate involves running incremental static application security testing (SAST) analysis of new source code.

When conducting the SAST analysis, we look for certain types of issues and vulnerabilities. For example, if the feature involves data flowing from user inputs to a database, we’ll flag any high-severity input validation vulnerabilities, such as cross-site scripting (XSS) or SQL injection. These issues can often be identified early in development before the entire feature is completed. Any such defects found during the automated analysis must be fixed before the developer can proceed with the next steps.
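To make the SQL injection class concrete, here is a minimal sketch of the kind of sink a SAST scan flags and its parameterized fix. The table, column and function names are invented for the example:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # FLAGGED by SAST: user input concatenated into SQL -- classic injection sink.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # FIX: parameterized query; the driver handles quoting and escaping.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # the injection payload leaks every row
print(find_user_safe(conn, payload))    # the parameterized query matches nothing
```

Catching the unsafe pattern at the first gate, before the feature is even complete, is far cheaper than finding it in a penetration test later.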

2. The Quality Assurance Gate

The second gate stands at the quality assurance (QA) and nightly build level. Each night, we produce a QA-ready build that testers will use the following day to review and test any partial features that are ready, using a fully installed build. For this gate, in addition to automated unit testing, we conduct more thorough system/functional tests, which are also completely automated.

Our overall goal is to deliver something truly usable to QA every day. A feature may not be complete, but testers can install and run a selected set of tests based on where we are in user story implementation. We also run a SAST scan of the product’s source code in conjunction with dynamic application security testing (DAST) analysis of our test web servers.

At this gate, we’ve configured our Jenkins build to fail if a certain threshold of defect/vulnerability levels and types is hit. Typically, we’re looking for incorrect feature behaviors. In security scans, we’re looking for true positives that are marked with a high confidence of possible exploitability by attackers.
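The gating logic itself is simple: after the scan runs, a small script can count the findings that meet the blocking criteria and fail the build if they exceed a threshold. The findings format, field names and threshold below are assumptions for illustration, not the article's actual tooling:

```python
# Sketch of post-scan gate logic a CI job might run; in Jenkins, a nonzero
# exit code from this script would fail the build.

FAIL_THRESHOLD = 0  # any high-confidence, high-severity finding blocks the build

def gate(findings, threshold=FAIL_THRESHOLD):
    """Return True (pass) unless blocking findings exceed the threshold."""
    blocking = [
        f for f in findings
        if f["severity"] == "high" and f["confidence"] == "high"
    ]
    return len(blocking) <= threshold

findings = [
    {"rule": "xss", "severity": "high", "confidence": "high"},
    {"rule": "weak-hash", "severity": "medium", "confidence": "high"},
]
if not gate(findings):
    print("GATE FAILED: blocking security findings present")
    # in CI: sys.exit(1) to fail the build and trigger team notification
```

Keeping the criteria in code also makes the threshold itself reviewable and versioned alongside the product.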

When a build fails based on these criteria, the right people on the scrum team are immediately notified via automated messaging. They can then begin triaging defects and resolving the issues so that a new build can be started. We begin to accumulate a trend graph at this gate so that the team has clear visibility into ongoing security or quality issues. This lets the team see how the quality and security of its code is improving day-to-day, sprint-to-sprint and year-to-year.

3. Production-Ready Build

The third gate stands at our weekly production-ready build. We typically work in three-week release iterations, but the scrum team has a goal to validate that the source base is ready for release at least once a week throughout the sprint.

At this gate, we require that no in-progress code be functional in the source — using build-time and run-time switches to hide in-progress work — and execute an extensive, entirely automated run of unit, system and security tests on the entire product.
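A run-time switch for hiding in-progress work can be a simple feature flag that defaults to off, so a production-ready build never exposes unfinished functionality. The flag name and the environment-variable mechanism below are illustrative assumptions:

```python
import os

def feature_enabled(name: str) -> bool:
    """In-progress features default to off; production builds never set the flag."""
    return os.environ.get(f"FEATURE_{name.upper()}", "0") == "1"

def render_menu():
    items = ["Home", "Reports"]
    if feature_enabled("new_dashboard"):  # hidden until the story is complete
        items.append("Dashboard (beta)")
    return items
```

With the flag unset, `render_menu()` returns only the finished items, so the weekly production-ready build can be cut from trunk even while the dashboard story is mid-flight. Build-time switches work the same way, except the check happens at compile or packaging time and the in-progress code is excluded from the artifact entirely.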

Here, we’re looking for problems or vulnerabilities that need to be triaged to conduct accurate risk assessments and to determine whether the product should be delivered as is. We look at all security vulnerabilities, including those lingering in our backlog that may have been deferred, and perform assessments of efforts to resolve and/or close out any remaining defects and minimize rolling technical debt.

4. Ready-to-Ship Build

The final gate comes at the end of the iteration with our ready-to-ship build. At this point, all hands are finally on deck — everyone is a tester and everyone spends time simply using the product. We run through acceptance tests from specific scenarios and user points of view to verify that user stories are complete.

We even assign a couple of “dumb users” (usually managers) to use the product as though it were the very first time. To keep it real, these users pretend to have no interest in reading any product documentation. This permits us to determine whether we’ve missed the forest for the trees after having been so deeply immersed in implementation throughout our sprint. We also get the features in front of stakeholders and product owners to walk through our implementation, making sure that no incorrect assumptions have been made and that all expectations and acceptance criteria have been met.

The Value of Sprint Automation

We’re able to accomplish this because we’ve invested in automating the process, and we rely on automation regularly and iteratively throughout the sprint. There’s nothing unique about our approach to the final gate. Most scrum teams ramp up testing and call “pens down” on their developers toward the end of their sprints.

That’s all great, but such an approach frequently results in a mad scramble that unearths last-minute surprises and leads to less secure and lower-quality code being formally released. It can even prompt decisions to defer feature implementations from one release to the next.

You can make that final production push far more efficient by using your DevOps system with an iterative, progressive testing and gating approach. This can also help you achieve a high level of quality and security with every release of your software.

