October 14, 2015 By Diana Kelley 3 min read

The other night on “The Big Bang Theory,” the character Sheldon referenced Archimedes: “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” In other words, with the right tool (the lever) and support (the fulcrum), anything can be accomplished. But when it comes to solving our cybersecurity problems, things don’t always seem doable.

A big part of the problem is people. What’s the “hardest recruiting that there is on the planet today”? According to United States Chief Information Officer Tony Scott, it’s finding people with cybersecurity skills. And even if your company can find the right people, it’s a good bet they’re not going to come cheap.

This is especially true in the case of application security and testing. But while the application security skills gap widens, the need for better and faster testing of software has never been greater. Apps power our cars, our medical devices and our energy grids, and they house plenty of personal information.

We know we need to test our software for abuse cases and coding flaws, but we can’t find or afford the talent to do it. We don’t have the right lever, we don’t have the right support and we don’t have enough people with the talent to move that lever. While there is no way to replace an experienced, highly skilled application penetration tester, there are parts of the application security testing puzzle that can benefit from the automation lever.

Learn How to Effectively Manage Application Security Risk in the Cloud

Let’s take a look at where and how automated application security testing can be leveraged to address the cybersecurity skills gap.

Early

It’s almost a truism in the industry at this point: The earlier in the software development life cycle that an error or exposure can be identified, the faster and cheaper it will be to eradicate it. This means building security into development during the requirement definition and architecture phases.

It also means catching coding errors as early as possible — preferably during implementation. Testing tools that integrate with the build process can provide developers with mission-critical early warnings on where and how their software is vulnerable. Additionally, smart tools that provide suggestions on how to fix the errors mean developers can remediate issues quickly and get back to work on functionality.
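To make the idea concrete, here is a toy sketch of what a build-integrated check might look like. This is an illustration only, not a real static analysis tool: the pattern list and function names are invented for this example, and production SAST tools rely on full parsing and data-flow analysis rather than simple pattern matching.

```python
import re
from pathlib import Path

# Toy patterns a build-time check might flag. Real tools go far
# beyond regexes, but the build-gate idea is the same: find risky
# code at implementation time, before it ships.
RISKY_PATTERNS = {
    r"\beval\(": "eval() on untrusted input enables code injection",
    r"\bpickle\.loads\(": "unpickling untrusted data can execute code",
    r"password\s*=\s*['\"]": "possible hard-coded credential",
}

def scan_file(path):
    """Return (line_number, message) pairs for each risky match in one file."""
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

def scan_tree(root):
    """Scan every .py file under root; map each flagged file to its findings."""
    results = {}
    for path in Path(root).rglob("*.py"):
        findings = scan_file(path)
        if findings:
            results[str(path)] = findings
    return results
```

Wired into the build so that a nonempty result fails the job, even a check this simple surfaces problems while the developer still has the code fresh in mind, which is exactly when fixes are cheapest.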

Easily

Not only is it hard to find the right skill set for app testing, it can also be challenging to find budget and resources to even set up the infrastructure for it. Very often, application testing requires separate servers, software and admins to configure and manage the systems.

Tuning software security testing tools takes skill, and few developers have the time or support to acquire those skills. But with a smart, cloud-based testing tool, developers don’t need to be experts in security testing, and companies don’t need to invest in additional hardware.

Quickly

Whether your company is adopting agile or has already embraced DevOps, it’s a good bet that your development cycles are a lot faster than they were a decade ago. That means your software security testing has to be faster, too.

Developers need tools that can keep up with accelerated delivery schedules, such as the ability to upload code or applications to the cloud at night and have a full test report waiting the next morning. For even tighter turnaround times, teams can integrate cloud-based software security testing directly into the build environment.

About Those Pen Testers

You’ve built security in, the dev team is humming along with automated application security testing in the cloud, and updates and new functionality are being pushed out every couple of days. Application security problem solved, right?

Well, not quite. As important and valuable as automated, cloud-based application security testing is, it’s not going to catch every problem — yet (but check back with us in a few years after we’ve expanded cognitive technologies in application security testing). Organizations still need to employ top-notch penetration testers to assess applications before launch and in production.

Automated application security testing isn’t about replacing human pen testers: It’s about spending scarce resource dollars as effectively as possible. Let the easy-to-recruit employees and cloud-based tests catch the vulnerabilities and errors that can be remediated early in the process. That way, the expensive, hard-to-recruit human testers can concentrate on finding truly complex problems.

Automation is your lever and the cloud is your fulcrum — now go move the application security testing world.

To Learn More

To test-drive IBM Application Security on Cloud for yourself, register now for our complimentary trial.
