Peek inside any enterprise security operations center (SOC) today, and you’ll likely see a crowded, high-pressure cybersecurity ecosystem. Over the past few years, as technology has evolved rapidly, attackers have developed a growing array of strategies and tactics. In response, security organizations have deployed more and more tools and point solutions, engaged with a growing number of vendors and service providers, and collected ballooning volumes of data.

To what end? According to research conducted by Forrester Consulting on behalf of IBM, the average organization’s cybersecurity ecosystem now involves 25 disparate security products and services from 13 different vendors, with major enterprises using more than 100 unique security tools.

But as the Ponemon Institute’s most recent Cyber Resilient Organization Report reveals, security operations programs using fewer than 50 security solutions are 9% better at detecting attacks and 7% better at defending against attacks than SecOps programs using more than 50 solutions. How did we arrive at this point where more solutions means more problems?

How the Cybersecurity Ecosystem Grew

To answer this question, we’ll trace the evolutionary history of today’s cybersecurity ecosystem. Throughout the development of networked computing, increasing systems’ connectivity and improving functionality were generally seen as more important than decreasing security risks. Early systems were simply not designed with security in mind. Once discovered, openings in the cybersecurity ecosystem were addressed in a reactive fashion.

Today’s IT systems are already far more complex than the first large-scale general-purpose computing networks, of course. And as they swell to include rising numbers of mobile devices, Internet of Things (IoT) sensors, cloud services and software-as-a-service (SaaS), they’re only getting more complex. As long as enterprises continue to respond to this expansion in a reactive fashion, however, the excesses — too much data, too many alerts, too much code, too many silos and too much work for security teams — will remain.

The Early Era: 1970s – 1980s

The earliest computer viruses and “worms” in the cybersecurity ecosystem were experiments rather than malicious in nature. Creeper, widely considered the world’s first virus, was developed by researchers exploring how code could travel across networks and replicate itself. The Morris Worm, another early malware program, was intended to gauge the size of the internet. Running the code taxed the resources of the computers it infected, causing many of them to crash — a forerunner of today’s distributed denial of service (DDoS) attacks, but on a much smaller scale. The damage such stunts could cause was limited, since most computers at the time were networked only locally, if at all.

The Birth of the Firewall: Early Corporate Networks (1988-1992)

As network connectivity became more popular in enterprise computing, and as awareness grew that viruses could damage the cybersecurity ecosystem, researchers at Digital Equipment Corporation (DEC) created and ran the first internal corporate firewall in 1988. Their design became the basis for the first commercial firewall, DEC SEAL, which shipped in 1992.

The developers modeled the network firewall after the architectural structures used to prevent fires from spreading across large buildings. Routers separated networks into sub-groups and logged and screened all traffic before allowing it to pass between them. The idea was to ensure an attack affecting one part of the network could no longer compromise the whole. All traffic filtering occurred on the basis of prewritten rules, so firewall designers had to know which sources were likely malicious before they could write these rules — a reactive approach.
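
The logic of those early firewalls can be sketched in a few lines of Python. This is a minimal illustration of rule-based filtering, not DEC SEAL’s actual design; the rule set, packet fields and addresses are all invented placeholders:

```python
# A minimal sketch of first-generation, rule-based packet filtering.
# BLOCK_RULES, the packet fields and the addresses are invented placeholders.
BLOCK_RULES = [
    {"src_net": "198.51.100.", "port": None},  # block a known-bad source network
    {"src_net": None, "port": 23},             # block Telnet from any source
]

def allow_packet(packet: dict) -> bool:
    """Return True if no prewritten rule matches the packet."""
    for rule in BLOCK_RULES:
        src_match = rule["src_net"] is None or packet["src"].startswith(rule["src_net"])
        port_match = rule["port"] is None or packet["dst_port"] == rule["port"]
        if src_match and port_match:
            return False  # drop: a rule written before the attack matched
    return True  # default-allow: anything the rule authors did not anticipate passes

print(allow_packet({"src": "198.51.100.7", "dst_port": 80}))   # False: blocked
print(allow_packet({"src": "203.0.113.9", "dst_port": 443}))   # True: no rule matches
```

The final default-allow line is the reactive weakness: traffic from any source nobody anticipated passes straight through.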

Dawn of the Internet: The Antivirus Era (1990s)

In the early 1990s, the National Science Foundation (NSF) lifted all restrictions on commercial use of the Internet. Within a year, and following the introduction of Mosaic, the first multimedia World Wide Web browser, Internet usage began doubling every three months. Threat actors caught on to the trend right away: as the Internet grew, malware designed to infiltrate corporate networks and consumers’ home computers — stealing data and destroying systems along the way — spread rapidly as well.

Early antivirus programs compared signatures (file hashes and, later, code strings) from known malware with files on users’ computers to spot infection. Not only was this approach to preventing virus infection reactive by design, it was also research-intensive. Specific features of the malicious code had to be identified before updates could be delivered to an antivirus company’s customers.
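
At its core, signature matching is a lookup: hash each file and check the result against a database of known-malicious hashes. A minimal sketch of the idea follows; the signature set, scan path and digest algorithm are chosen for illustration, not taken from any vendor’s engine:

```python
import hashlib
from pathlib import Path

# A minimal sketch of hash-based signature scanning. The signature set below
# is a placeholder; real vendors shipped constantly updated databases.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # placeholder value, not a real malware hash
}

def scan(directory: str) -> list[Path]:
    """Flag files whose SHA-256 digest matches a known-malware signature."""
    infected = []
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                infected.append(path)
    return infected

print(scan("downloads"))  # hypothetical directory to scan
```

The weakness follows directly from the design: changing a single byte of the malware yields a new hash, so researchers had to analyze each variant and ship updated signatures before customers were protected.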

This approach positioned defenders one step behind attackers, always awaiting their next move. In addition, antivirus software tended to produce high false positive rates. Antivirus products were also often resource-intensive to run, interfering with the user experience or malfunctioning when used alongside other vendors’ antivirus tools.

Reacting to a Problem: The Firewall as Single Point of Failure

Firewalls protected only a single entry point into the network, so they were useless if an intruder found another way into the cybersecurity ecosystem. In response, defenders developed Intrusion Detection Systems (IDS) to look for system changes or streams of network packets associated with known attacks. Early systems were network-based, meaning they collected and inspected packets for signs of malicious activity.

To be effective, IDS required a database of indicators to determine whether traffic was threatening. So, like antivirus software products, these systems depended on prior knowledge of current threats. IDS collected growing amounts of data on the health and status of IT environments, which security analyst teams then monitored and analyzed.
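
Early network-based IDS worked much like antivirus scanning applied to traffic: compare each packet payload against a database of known-attack patterns. A minimal sketch, with the rule set reduced to two illustrative entries (real IDS databases held thousands):

```python
# A minimal sketch of signature-based network intrusion detection.
# The patterns below are illustrative; real IDS rule sets held thousands of entries.
SIGNATURES = {
    b"\x90\x90\x90\x90": "possible NOP sled (buffer-overflow staging)",
    b"GET /cgi-bin/phf": "probe for a known-vulnerable CGI script",
}

def inspect(payload: bytes) -> list[str]:
    """Return the names of any known-attack signatures found in a packet payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

print(inspect(b"GET /cgi-bin/phf?Qalias=x HTTP/1.0"))
# ['probe for a known-vulnerable CGI script']
```

As with antivirus signatures, an attack that didn’t match an indicator already in the database passed unnoticed.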

Rise of the SIEM: The Compliance Era

High-profile incidents ranging from the Yahoo! logic bomb to the spread of Microsoft Windows 98 bugs attracted increasing amounts of public scrutiny. Governments began to require companies to follow better cybersecurity practices. Enterprises encountered new legal mandates to collect and maintain network log data for audits.

Where could security teams store all this data so that they could retrieve it for use in incident investigations? How could they make it easy to understand? To solve these problems, security information management (SIM) tools came into widespread use. By this stage, most SOCs used an assortment of security tools in their cybersecurity ecosystem, often from multiple vendors. Teams needed a centralized system that could automate the collection of log data from firewalls, antivirus software and other sources of endpoint and network telemetry data. They also needed a way to translate the raw log data into common formats that were simpler to search and report on.
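
Normalization was the key step: raw log lines from each product had to be mapped onto a common event schema before they could be searched or reported on together. A hedged sketch of the idea, with both source log formats invented for illustration:

```python
import re

# A minimal sketch of SIM-style log normalization. Both source formats below
# are invented stand-ins for real firewall and antivirus log lines.
FIREWALL_RE = re.compile(r"(?P<ts>\S+ \S+) DENY src=(?P<src>\S+) dst=(?P<dst>\S+)")
ANTIVIRUS_RE = re.compile(r"(?P<ts>\S+ \S+) DETECTED (?P<malware>\S+) on (?P<host>\S+)")

def normalize(line: str) -> dict | None:
    """Map a raw log line onto a common event schema, or None if unrecognized."""
    if m := FIREWALL_RE.match(line):
        return {"time": m["ts"], "source": "firewall", "type": "deny",
                "src": m["src"], "dst": m["dst"]}
    if m := ANTIVIRUS_RE.match(line):
        return {"time": m["ts"], "source": "antivirus", "type": "detection",
                "host": m["host"], "detail": m["malware"]}
    return None

print(normalize("2009-06-01 12:00:03 DENY src=203.0.113.9 dst=10.0.0.5"))
print(normalize("2009-06-01 12:00:04 DETECTED Trojan.Agent on WKSTN-042"))
```

Once every product’s output shared one schema, a single search or report could span the whole toolchain.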

But searching large volumes of log data across the cybersecurity ecosystem was complex, time-consuming and usually performed only in the aftermath of a known incident. To make it possible for teams to detect in-progress attacks, vendors expanded SIM tools into security information and event management (SIEM).

The event management component of this technology involves spotting red-flag activities as they happen. To do this, analysts create rules or algorithms that reflect the specifics of their networks. The adoption of SIEM technologies meant that security operations teams were now responsible for more programming, and for handling still more data, than they had been in the past.
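
A typical correlation rule of this kind flags a pattern that no single log line reveals, such as repeated authentication failures from one source. A minimal sketch; the event schema, threshold and window are invented for illustration:

```python
from collections import defaultdict, deque

# A minimal sketch of a SIEM-style correlation rule: alert when one source
# produces five or more failed logins within 60 seconds. The event schema,
# threshold and window are invented for illustration.
WINDOW_SECONDS = 60
THRESHOLD = 5

failures = defaultdict(deque)  # source address -> recent failure timestamps

def on_event(event: dict) -> str | None:
    """Feed normalized events in time order; return an alert string or None."""
    if event["type"] != "failed_login":
        return None
    recent = failures[event["src"]]
    recent.append(event["time"])
    while recent and event["time"] - recent[0] > WINDOW_SECONDS:
        recent.popleft()  # discard failures that fell outside the sliding window
    if len(recent) >= THRESHOLD:
        return f"Possible brute force from {event['src']}"
    return None

for t in range(0, 50, 10):  # five failures in 40 seconds
    alert = on_event({"type": "failed_login", "src": "10.0.0.8", "time": t})
    if alert:
        print(alert)  # fires on the fifth failure inside the window
```

Every such rule is code that someone on the security team has to write, tune and maintain as the network changes.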

Cloud Computing Makes the Cybersecurity Ecosystem More Complex

Amazon Web Services (AWS) launched in 2006. By the late 2000s, public cloud computing had taken off worldwide as providers made previously unheard-of computing power available on a subscription basis. The benefits of cloud computing are legion, but so are its drawbacks: it makes the cybersecurity ecosystem far more complex and dissolves the walls between networks.

Security organizations have continued to add point solutions to address the challenges cloud computing has brought, often adding separate teams to manage these new tools. In many cases, they brought in separate solutions for different endpoints and hardware, running these tools side by side. Merging the data, policies and management of these sprawling security architectures is no easy task.

Future Cybersecurity Ecosystem Challenges

Things won’t get simpler in the near future. IoT adoption will increase the number of connected devices. At the same time, 5G networking will accelerate demand for mobile devices to access corporate computing resources. Meanwhile, ‘big data’ analytics and, eventually, quantum computing promise to dramatically increase the number of operations that can be performed in any given time frame.

The challenges remain. Teams will continue to struggle to maximize the value of the tools they already have in their cybersecurity ecosystem. The solution is to put systems in place that allow those tools to freely exchange data, which in turn can power analytics and support automated incident response workflows. What’s needed is an open ecosystem where cybersecurity products can work together without the need for customized connectors. In this world, vendors, end users and cybersecurity researchers can meet to develop code, standards and practices that security operations teams can share over the entire threat management life cycle.
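
Open standards are what make connector-free interchange possible. STIX, for example, expresses threat indicators as vendor-neutral JSON that any conforming tool can parse. The sketch below builds an indicator loosely modeled on a STIX 2.1 object; the name, pattern and address are illustrative:

```python
import json
import uuid
from datetime import datetime, timezone

# A sketch of vendor-neutral threat-data interchange, loosely modeled on a
# STIX 2.1 indicator object. The name, pattern and address are illustrative.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Known command-and-control IP address",
    "pattern": "[ipv4-addr:value = '203.0.113.9']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Any STIX-aware tool, from any vendor, can ingest this without a custom connector.
print(json.dumps(indicator, indent=2))
```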

This vision of the future is already taking shape. In the Open Cybersecurity Alliance project, a number of the world’s leading cybersecurity organizations have come together in support of seamless interoperability. The focus of the project is to make data interchange within security operations easier, so that cybersecurity tools can work together simply and effectively.
