For years, developers and IT security teams have been at loggerheads. While developers feel security slows progress, security teams assert that developers sacrifice security priorities in their quest to accelerate production.

This disconnect results in flawed software that is vulnerable to attack. While advocates for speed and security clash, consumers often pay the price when threat actors strike. In 2022, 48% of developers admitted they were still shipping code with vulnerabilities.

It’s clearly time for a change. Many believe that “secure-by-design” is the way forward. But the big question remains: who is responsible for securing the code?

What is secure-by-design?

“Secure-by-design” is a software development approach that aims to create systems that are resilient to cyberattacks. As a core tenet of DevSecOps (development, security and operations), teams embrace secure design patterns from initial conception. The goal is to create foundationally robust products free of flaws, weaknesses and vulnerabilities — instead of relying on patches or software updates.
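A small sketch illustrates what a secure design pattern looks like in practice. Parameterized database queries, for example, defeat SQL injection by design rather than by after-the-fact patching. The table and data below are purely illustrative:

```python
import sqlite3

# In-memory database with a sample users table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Anti-pattern: string concatenation lets crafted input rewrite the query.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Secure-by-design pattern: placeholders treat input as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injection matches every row
print(find_user_safe(payload))    # the payload is just an ordinary string
```

The safe version requires no extra effort once it is the default pattern, which is exactly the point of building security in from the start.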

However, “secure-by-design” is a theory, not a standard practice governed by any law. The reality of shipped software and hardware products is much different.


The dark reality of application security

Despite the aspirations for security by design, code is far from impervious. Even at major organizations, including government agencies and health and financial institutions, application security remains rife with vulnerabilities.

A December 2020 survey by the Linux Foundation revealed most open-source programmers prefer to improve tools, code new features and work on new ideas. Security is the last priority — with the average contributor spending less than 3% of their time securing their code.

And here’s the crux of the problem: Open-source code is everywhere. The 2023 OSSRA report by Synopsys reviewed codebases across 17 key industry sectors. The findings were bleak:

  • 96% of software applications contain open-source software components
  • 92% of codebases across all 17 industry sectors have open-source code
  • 84% of codebases contain at least one vulnerability

Most worryingly, several sectors have open source in 100% of their codebases, including Aerospace, Aviation, Automotive, Transportation, Logistics and the Internet of Things.
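Those findings come from software composition analysis (SCA), which at its core matches a project's declared dependencies against a feed of known vulnerabilities. A minimal sketch of the idea, using hypothetical package names and advisory IDs rather than real CVE data:

```python
# Hypothetical vulnerability feed: (package, version) -> advisory IDs.
# Real SCA tools pull this from databases of published advisories.
KNOWN_VULNERABILITIES = {
    ("examplelib", "1.2.0"): ["EXAMPLE-2023-0001"],
    ("legacyparser", "0.9.1"): ["EXAMPLE-2022-0042"],
}

def audit_dependencies(deps):
    """Return (package, version, advisories) for each vulnerable dependency."""
    findings = []
    for name, version in deps:
        advisories = KNOWN_VULNERABILITIES.get((name, version))
        if advisories:
            findings.append((name, version, advisories))
    return findings

project_deps = [("examplelib", "1.2.0"), ("safelib", "2.0.0")]
print(audit_dependencies(project_deps))
```

Production tools add version-range matching and transitive dependency resolution, but the principle is the same: you cannot secure components you do not know you are shipping.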

Why now is the time for secure-by-design

Poorly coded applications present more viable attack vectors for cyber criminals to exploit. At the level of small companies or consumers, attackers could steal personally identifiable information (PII) to take over accounts and run identity theft scams.

But for enterprises and government bodies, the potential consequences are much worse. If state-sponsored threat actors in Russia or China attack critical infrastructure such as water systems, transport or banking services, it could impact millions of people across the United States.

That scenario has already come close to becoming a reality. In 2020, a Russian intelligence agency compromised SolarWinds’ Orion software, using trojanized updates to hack the State Department, the Pentagon, Homeland Security and dozens of federal agencies. But despite this wake-up call, the underlying problems persist.

A year after the SolarWinds attack, a federal cybersecurity evaluation found seven out of eight federal agencies failed to meet basic cybersecurity standards. Reliance on aging computer systems and outdated code was a problem across the board.

However, private companies are not faring much better. In December 2022, Google confirmed its ninth Chrome zero-day vulnerability of the year, putting some 3.2 billion users at risk. It’s clear the threat isn’t going away.

Who is responsible for security in the development cycle?

Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly addressed the ongoing frailties in a speech at Carnegie Mellon University:

“As we’ve integrated technology into nearly every facet in our lives, we’ve unwittingly come to accept as normal that such technology is dangerous by design.”

Easterly called for private industries to improve security in technology products, emphasizing their responsibility to end users.

“We’ve normalized the fact that the cybersecurity burden is placed disproportionately on the shoulders of consumers and small organizations who are often least aware of the threat and least capable of protecting themselves.”

As CISA pushes for more accountability from software companies, Easterly laid out some principles for companies to improve security by design:

  • The burden of safety should never fall on customers. Instead, the industry must take ownership of security outcomes.
  • Manufacturers must embrace radical transparency, disclosing vulnerabilities and helping consumers understand the scope of consumer safety challenges.
  • Tech industry leaders need to focus on building safe products and publish road maps explaining how they plan to develop and update secure-by-design technology.

By transforming the development process into one that puts security first, companies can spend less time fixing bugs and creating patches. Instead, software companies can invest more time in growth and innovation.

Left shift or culture shift?

Until now, security has been an afterthought, often addressed reluctantly at the end of the development cycle, or only after users or attackers discover flaws. The general consensus in the industry is that shifting security left in the software development life cycle (SDLC) is the solution. But that’s easier said than done.
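In concrete terms, shifting left often means adding an automated security gate early in the CI pipeline: scan results are checked on every commit, and the build fails before insecure code can merge. A minimal sketch of such a gate, using made-up finding IDs rather than output from any real scanner:

```python
# Severity ranking used to compare findings against the failure threshold.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def security_gate(findings, fail_at="high"):
    """Return True (build passes) if no finding reaches the failure threshold.

    `findings` is a list of dicts like {"id": "SAST-101", "severity": "medium"},
    as a scanner step in CI might emit.
    """
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER[f["severity"]] >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['id']} ({f['severity']})")
    return not blocking

scan_results = [
    {"id": "SAST-101", "severity": "medium"},
    {"id": "SAST-102", "severity": "critical"},
]
print("build passes:", security_gate(scan_results))
```

Because the gate runs on every commit, flaws surface while the code is still fresh in the developer's mind, instead of months later in a penetration test.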

Friction between development and security teams stems from each side’s belief that the other is the one causing problems. This “us against them” mentality defeats the purpose of DevSecOps, leaving the teams to work in silos.

Before change can happen in the SDLC, there must be a cultural shift to bridge the gap between the teams.

Companies must do more to educate security professionals on the challenges developers face. By taking basic coding courses and working alongside programmers, security professionals can see how hard it is to meet security standards.

Breaking down the silos to get the perspective from the coder’s side will help the people governing the SDLC gain more empathy for developers. Similarly, as developers get more education on security, they can understand its importance and ROI. With proper coaching, developers can find ways to integrate security in a more seamless way.

CISA is calling on more universities to include security as a standard aspect of computer science coursework. By making security a key quality metric, the expectation is developers will begin taking it seriously.

Collaboration is the catalyst for change

Without secure code, applications remain at risk of attack. But the buck must stop with technology companies. Security teams and developers need to embrace the spirit of DevSecOps to deliver secure products capable of protecting sensitive data.

By adopting a collaborative culture that puts security top of mind from the start of the SDLC, companies can encourage old foes to come together to become the bastion of defense consumers need.

As development and security teams foster an ethos of “secure-by-design” and help each other achieve their goals, companies can ensure better code quality and security, ready to combat cyberattacks today and in the future.

Ready to start building a culture of security by design? The IBM Cloud DevSecOps Specialty helps cloud professionals understand the convergence of development, security and operations to seamlessly integrate security within a continuous delivery pipeline. You can start training today.
