September 4, 2019 By Brett Valentine 4 min read

Network segmentation is a concept that dates back to the start of enterprise IT systems. The simplest demonstration of this is separating application and infrastructure components with a firewall. This concept is now a routine part of building data centers and application architectures. In fact, it’s nearly impossible to find examples of enterprises without some network segmentation model in place.

More recently, many have stated that microsegmentation is sufficient to secure these services. Microsegmentation techniques provide granular point-to-point traffic restrictions between services and can be user-session aware. But the modern concept of network segmentation is more than source and destination restrictions. Best practices for network segmentation require the following capabilities:

  • Intrusion detection and prevention systems (IDS and IPS) to detect and block malicious traffic based on known CVEs, behavior-based patterns and industry intelligence
  • Antivirus and anti-malware inspection to detect and block virus and malware behaviors within traffic
  • Sandboxing to execute and process traffic in a “safe” virtual environment to observe the results before passing it on if it’s valid traffic
  • Web application firewalls to detect and block application-based threats
  • Distributed denial-of-service (DDoS) protection to block brute-force and denial-of-service attacks
  • SSL decryption and monitoring to gain visibility into encrypted traffic and respond to threats within it

In an on-premises scenario, next-generation firewalls provide most of these capabilities. Ideally, the firewalls only allow traffic on valid ports. But regardless, these firewalls can inspect traffic on all ports, including the open valid ports (e.g., 80, 443), to ensure malicious behaviors are not being transmitted.

In an Amazon Web Services (AWS) environment, no single native service provides these capabilities between services, but many capabilities can be combined to do so. These threats must be mitigated through careful security configuration.

How to Achieve Network Segmentation in AWS

Let’s assume an example application running on AWS has four components: content on S3, Lambda functions, custom data processing components running on EC2 instances and several RDS instances. These reflect three network segmentation zones: web, application and data.

Inbound traffic is sent to static or dynamic pages in S3. These pages initiate Lambda functions to manipulate and transform the data provided. The Lambda functions call custom complex logic served by systems running on EC2 instances. The Lambda functions and the EC2 systems interact with multiple RDS databases to enrich and store the data in various formats. In real life, these components would use many other AWS configurations and policies, but this simplified view suffices for this discussion.
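
To make the flow concrete, here is a minimal sketch of the Lambda step under the assumptions above. The endpoint URL, environment variable and field names are hypothetical, since the example does not specify them.

```python
import json
import os
import urllib.request

# Hypothetical endpoint for the custom processing service running on EC2;
# the article does not name one, so this value is a placeholder.
EC2_API_URL = os.environ.get(
    "EC2_API_URL", "https://ec2-processing.example.internal/process"
)


def handler(event, context):
    """Minimal sketch of the Lambda step in the example flow: take data
    submitted via the S3-hosted front end, transform it, and pass it to
    the custom logic running on EC2."""
    payload = {"records": event.get("records", [])}

    # Forward the transformed payload to the EC2-hosted processing service.
    req = urllib.request.Request(
        EC2_API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        result = json.loads(resp.read())

    # In the full example, the result would also be enriched and written to
    # one or more RDS databases before returning.
    return {"statusCode": 200, "body": json.dumps(result)}
```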

Consider the typical behavior of application developers: They leave security controls loose and get the work done as fast as possible. The diagram below shows this nonsecured flow and an overlay of the desired network zones to be created.

Meeting the segmentation requirement takes multiple AWS configurations, including the following (a configuration sketch follows the list):

  • AWS Shield Advanced;
  • AWS WAF;
  • VPC – Private Subnet;
  • VPC – Public Subnet;
  • VPC – Internet Gateway;
  • VPC – Route Table;
  • VPC – Security Groups;
  • VPC – Network Load Balancer;
  • Virtual next-generation firewalls; and
  • Amazon CloudWatch.
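
As a starting point, the VPC scaffolding in that list can be stood up with a few API calls. The following is a minimal boto3 sketch, assuming a single region and illustrative CIDR blocks; Shield, WAF, the firewalls and the load balancer would be layered on top of it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create the VPC that will hold the application and data zones.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Public subnet for externally reachable components (the EC2 application
# servers); private subnet for the databases.
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]

# Internet gateway plus a route table that sends outbound traffic through it,
# associated only with the public subnet so the private subnet stays
# unreachable from the internet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc_id)

public_rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.create_route(
    RouteTableId=public_rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(
    RouteTableId=public_rt["RouteTableId"], SubnetId=public_subnet["SubnetId"]
)
```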

How Network Segmentation Works

The inbound traffic requests are first screened by AWS Shield. This mitigates DDoS attacks and certain other disruption vectors. The request is then analyzed by AWS WAF, which blocks threats such as SQL injection and scans against known CVEs, and can enforce IP whitelisting (depending on the nature of the application's needs). The inbound traffic is then sent to S3.
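
A minimal sketch of the WAF portion, using the boto3 wafv2 client and an AWS managed SQL injection rule group; the ACL name, scope and priorities are illustrative rather than taken from the example.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Web ACL that blocks common SQL injection patterns using an AWS managed
# rule group. IP whitelisting could be added as a further rule statement.
wafv2.create_web_acl(
    Name="segmentation-demo-acl",
    Scope="REGIONAL",  # illustrative; a CloudFront-facing ACL would use CLOUDFRONT
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "segmentation-demo-acl",
    },
    Rules=[
        {
            "Name": "sqli-protection",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli-protection",
            },
        }
    ],
)
```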

Next, Lambda functions manipulate and translate the data provided. All of this processing is done in publicly accessible services in AWS. Security for the next steps in the processing is handled within a VPC.

The traffic from Lambda is sent through an internet gateway and then routed to a network load balancer. The load balancer redirects to one of several virtual next-generation firewalls. Why do we need a load balancer and multiple firewalls? For redundancy and capacity, of course. These firewalls apply IDS/IPS, malware detection, sandboxing and, in some cases, SSL decryption for packet-level inspection by a security information and event management (SIEM) solution.
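
A sketch of that load balancer tier, assuming the firewall appliances run as EC2 instances; every ID shown is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internal network load balancer that fronts the pool of virtual
# next-generation firewall instances.
nlb = elbv2.create_load_balancer(
    Name="fw-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0aaa1111bbbb22223"],
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="fw-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

# Register two firewall instances so the tier survives the loss of one
# appliance and can absorb more traffic.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0fw000000000000a1"}, {"Id": "i-0fw000000000000a2"}],
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```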

Next, the request is routed according to the VPC route table, and security group policies restrict the source, destination, ports and routes for the traffic to ensure that only specific services can communicate. The route table also differentiates between public subnets (i.e., externally accessible, in this case for the EC2 application servers) and private subnets (i.e., the databases). All of the traffic processed within the VPC is captured in VPC Flow Logs and routed to the SIEM system, which is likely hosted on-premises or elsewhere.
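
The security group and flow log pieces might look like the following boto3 sketch, assuming a PostgreSQL-style database port and placeholder resource and role IDs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs -- the example does not give concrete resources.
APP_SG = "sg-0app00000000000001"   # security group on the EC2 application servers
DB_SG = "sg-0db000000000000001"    # security group on the RDS instances
VPC_ID = "vpc-0123456789abcdef0"

# Allow the databases to be reached only from the application tier, and only
# on the database port -- nothing else can talk to them.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": APP_SG}],
        }
    ],
)

# Capture all VPC traffic metadata in CloudWatch Logs so it can be
# forwarded to the SIEM.
ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```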

This model, with appropriate policies applied at each component, can achieve all of the network segmentation requirements described above.

Complexity Considerations and a Call to Action for Vendors

This traffic routing is obviously much more complex than a traditional system. Complexity is costly, and it increases the opportunity for errors and configuration gaps, not to mention the operational burden.

This routing will also impact performance. If this model protects a time-sensitive transaction such as an e-commerce site, it needs to be evaluated and optimized. But given the speed and performance within AWS, most users’ browsers and network connections are likely too slow to notice a difference. For transactions that are not extremely time-sensitive, this model will work fine.

Still, these capabilities and the need to segment network traffic have existed for a long time. AWS and the various network security vendors need to establish a more complete solution to offer within a VPC.
