The Next-Generation Data Center: Implementing Effective Internal Segmentation

Executive Summary

Enterprise architects and security managers have often wanted secure segmentation of their internal networks but shied away from it. Wanted, in order to provide a much higher level of security against lateral attacks and deeper visibility into network use; shied away from, for fear of creating chokepoints and hindering business innovation and service agility. New generations of security solutions have been built with internal segmentation in mind. They scale in terms of both throughput and management, are automation friendly, and can flexibly accommodate both changes in how services are provided and the introduction of new services and user communities. With proper implementation, they turn security into an enabler of business and of innovation by delivering improved protection, control, and compliance at speed.

The Issue: Fluid Service Delivery Meets Adaptive Persistent Threats

Enterprise security is caught on the horns of a grim dilemma: on the one hand, the enterprise is taking advantage of a broader set of IT services, provided from a broadening set of resource pools, and serving an untethered and changing community of users; on the other hand, threats to the enterprise are subtler, more widespread, and more powerful than ever before, and that broadening set of options is hugely expanding the threat surface of the enterprise.

Preying on the enterprise in this context, on its staff and its customers, partners, and suppliers, is the cybercrime black market, driven by players motivated by money, politics, or the quest for competitive advantage. Blind mass infection by malware cast to the winds and blatant attempts to penetrate firewalls have been eclipsed by targeted attacks reaching out along multiple vectors of approach—spearphishing emails plus infected websites plus social engineering combined with network breach attempts, application-level attacks, and denial of service attacks. The goal now is to achieve any exploitable toehold in the environment—a compromised laptop, a compromised staff account, a compromised server anywhere in the infrastructure—and then use that as a platform for lateral attacks based on weak internal protections at the account, system, and network levels.

Security needs to protect the enterprise from adaptive, persistent, and proliferating attacks without stifling innovation and flexibility. With so much attacker effort focused on gaining a base from which to launch lateral attacks, disrupting the ability to move laterally would be a huge improvement. Internal segmentation of the network provides a means of doing so.

Internal Segmentation To Harden the Target

Segmenting a network—using firewalls or other systems to block or filter the traffic allowed to flow from one place to another based on security policies—makes it harder for one compromised system to reach and breach others by restricting the scope within which it can hunt for vulnerable systems. Like the watertight compartments in a ship, a leak in any segment can be contained within that segment and danger to others and to the organization as a whole can be minimized.

Segmentation is particularly powerful because a great many attacks center on using direct network access to a system to find a way into it: an unsecured application or network port, a service that can be subjected to a denial of service attack and forced to crash the system or allow attackers to run malicious code, etc. If the attackers’ platform cannot even speak to the system they want to breach, if their packets never reach it or are discarded on arrival, most attacks are prevented rather than having to be detected and stopped in progress.
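
To make that concrete, here is a toy Python sketch of default-deny filtering between segments; the segment names, ports, and allow-list are illustrative assumptions rather than a reference to any particular product.

```python
# Toy model of default-deny segmentation: a flow is forwarded only if the
# policy explicitly allows that source-segment / destination-segment / port
# combination. All names and entries below are hypothetical.
from typing import NamedTuple

class Flow(NamedTuple):
    src_segment: str
    dst_segment: str
    dst_port: int

# Explicit allow-list; anything not listed is dropped before it reaches the target.
ALLOWED = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def permit(flow: Flow) -> bool:
    """Return True only if policy explicitly allows this flow."""
    return (flow.src_segment, flow.dst_segment, flow.dst_port) in ALLOWED

# A compromised web server probing the database directly never gets a packet through:
print(permit(Flow("web-tier", "db-tier", 5432)))   # False -> dropped
print(permit(Flow("app-tier", "db-tier", 5432)))   # True  -> forwarded
```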

IT uses segmentation sparingly on data center networks because doing so with traditional firewalls can create an environment that is both rigid and brittle, as well as tough to scale.

Rigid and brittle. The more complex the environment becomes, the more complex the firewall rulesets required to accommodate it become, and the harder it gets to keep the rules consistent and complete. Security staffs tend to be oversubscribed and rule changes are anything but quick and simple, needing both thorough testing and passage through a change management process. When every shift in how services are provided or consumed potentially requires one or more changes to firewall rulesets across a broad set of internal firewalls, the “drag” of making each change provides a strong disincentive to change and a significant brake on it. This is the antithesis of security enabling business and supporting innovation.

Tough to scale. The typical enterprise firewall is architected and priced on the assumption that it is dealing primarily with external traffic: the flows in and out of a data center. If put into a situation where it needs to deal with intra- and inter-data-center flows—not primarily users talking to systems but the full array of systems talking to each other—it may have to cope with orders of magnitude greater volumes of traffic and numbers of connections. As the enterprise continues to realize the dream of service-oriented architecture (SOA) via the current shift to microservices and containers, the number of endpoints inside the data center, the number of conversations among those endpoints, and the volume of system-to-system traffic will only continue to climb.

Agility, Scale Not Optional

So going forward, to secure the environment IT is going to have to segment it more—and to segment it more, IT is going to need to use something other than traditional perimeter firewalls. IT must deploy systems that scale, and that can deal with rapid change in the environment, and with the continuing shift from simple north-south flows to complex east-west flows.

A new solution must scale to handle thousands to millions of flows for highly-leveraged service components, and terabytes of cumulative throughput across a data center.

To handle so much traffic, and changing patterns in it, enforcement must be embedded in the network infrastructure at multiple locations, dispersed in the form of virtual appliances, or placed in container- or host-resident agents.

Each option offers a different balance of strengths.

Agent-based Segmentation

Agents riding along with the payloads they protect in a data center or cloud allow deep granularity and 100% control of what the service sees and reacts to. They can be deployed as workloads are deployed. But these agents have to compete for resources with the payload they are protecting. And (by definition) each agent can only protect one thing, even if an army of containers has the same basic security needs (listen to web front ends, talk to databases, ignore the rest) and minimal numbers of connections. An agent-based approach can therefore result in wasteful, redundant allocation of resources to achieve segmentation.

Virtual-Appliance-Based Segmentation

Using virtual appliances, IT cannot achieve quite the same level of granular, white-on-rice protection as with agents. However, with virtual appliances IT can put multiple resources behind a single shield, reducing duplicative resource consumption (though not segregating the protected workloads from each other). Virtual appliances can move, like their charges, and spinning up new instances to distribute workloads is simple. They too, though, compete with their charges for resources, and can run into problems scaling to meet high-throughput and high-flow-count demands.

Physical Appliances

Dedicated hardware appliances can provide the highest scale in terms of both throughput and connections, especially if based on optimized chipsets rather than general-purpose CPUs. They do not compete with protected workloads for resources, either. And they can be positioned anywhere in a network. However, they are necessarily a coarser-grained approach: segments have to be fewer and larger, and the “perimeter of one” strategy is not possible for virtualized or containerized workloads, so more systems will remain vulnerable to compromise by other systems inside their shared segment. And although the throughput available can be quite high, the scaling option is adding appliances, usually a larger and more costly increment to capacity compared to adding some server resources.

Deciding which approach or combination of approaches is best in a given data center will depend on IT carefully assessing the environment to be segmented. It must consider the degree to which it knows and can predict

  • the number of endpoints needing protection
  • the number of active communications flows each will be a party to
  • the volume of traffic each flow will generate
  • the duration of flows

So, for example, an organization that focuses on dynamic allocation and deallocation of virtual servers within a large set of separate web farms, in each of which

  • the compromise of one web server from another is a low risk event
  • millions of flows will exist simultaneously
  • more than 500 Gbps of traffic will flow constantly
  • web servers will speak to a steadily changing set of back-end services

might want to place a physical firewall between each web farm and everything else, filtering traffic both to and from the world and to and from all other parts of the infrastructure. (And, in fact, the likelihood of lateral compromise within the farm could be pushed close to zero by not letting the nodes behind the firewall talk to each other.)

On the other hand, an organization in the early stages of decomposing its primary inward-facing application into microservices running in containers and seeing rapid changes in traffic flows as a result, but expecting

  • hundreds of flows at most to any given container
  • short-lived flows of low volume adding up to under 1 Mbps for any given container

might want to spin up a virtual firewall on each host node to control the flow of traffic at a fine grain as its understanding of who needs to talk to whom evolves.
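
The assessment dimensions and the two scenarios above can be reduced to a rough selection heuristic. The Python sketch below is illustrative only: the profile fields mirror the bullets above, while the thresholds and the recommendation logic are assumptions made for the sake of the example, not sizing guidance.

```python
# Rough, illustrative heuristic for choosing an enforcement style from the
# assessment dimensions discussed above. Thresholds are assumptions only.
from dataclasses import dataclass

@dataclass
class SegmentProfile:
    endpoints: int                 # endpoints needing protection
    concurrent_flows: int          # active flows across the segment at one time
    peak_throughput_gbps: float    # aggregate volume of traffic
    avg_flow_seconds: float        # typical flow duration
    per_workload_isolation: bool   # is a "perimeter of one" required?

def recommend(profile: SegmentProfile) -> str:
    if profile.concurrent_flows > 1_000_000 or profile.peak_throughput_gbps > 100:
        return "physical appliance at the segment boundary"
    if profile.per_workload_isolation:
        return "host- or container-resident agents"
    return "virtual appliance per host or cluster"

# The two examples above, expressed as profiles (numbers are placeholders):
web_farm = SegmentProfile(endpoints=2_000, concurrent_flows=5_000_000,
                          peak_throughput_gbps=500, avg_flow_seconds=30,
                          per_workload_isolation=False)
containers = SegmentProfile(endpoints=300, concurrent_flows=30_000,
                            peak_throughput_gbps=0.3, avg_flow_seconds=2,
                            per_workload_isolation=False)
print(recommend(web_farm))    # physical appliance at the segment boundary
print(recommend(containers))  # virtual appliance per host or cluster
```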

Management is Crucial To Scale and Resilience

Scalable management is in some ways the most important feature of a segmentation solution for the new data center, since even the most powerful platform would be bound into the same rigidity and brittleness as previous generations of firewalls if it did not have a better approach to management.

A decade ago, per-firewall rule lists defined solely at the address/port/protocol level were a reasonable approach to securing infrastructure. Servers didn’t move, and were the basic unit of service provisioning in the data center. Today, servers are virtualized, movable, and new instances are deployed via automation as needed. In the future, the basic unit of service is likely to become the container: smaller, far more numerous, even more mobile.

So, today and in the future, to do internal segmentation properly IT will need to

  • identify vulnerable/important assets based on traffic flows and other factors
  • group entities logically into classes, automatically, as they come on line
  • apply high-level policies to those classes.

IT will have to be able to work on class definition and policy definition in a central management platform, which will propagate policies out to all the cooperating, distributed enforcement points and serve as the collection point for all monitoring information from the firewalls.
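
As a rough illustration of that class-and-policy model, the Python sketch below classifies workloads from provisioning metadata as they come online, defines policy against classes rather than addresses, and expands that policy into the per-endpoint rules a central manager might push to its enforcement points. All labels, class names, and rules here are hypothetical.

```python
# Illustrative class-based policy: policy is written once against logical
# classes; per-endpoint rules are generated and distributed automatically.
# Every name, label, and rule below is hypothetical.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    ip: str
    labels: dict   # metadata applied by the provisioning pipeline

def classify(w: Workload) -> str:
    # Group entities logically, automatically, as they come online.
    role = w.labels.get("role")
    if role == "web":
        return "web-frontends"
    if role == "db":
        return "databases"
    return "unclassified"   # unclassified workloads match no allow rules

# High-level policy, defined centrally against classes, not addresses.
POLICY = [("web-frontends", "databases", 5432, "allow")]

def compile_rules(workloads):
    """Expand class-level policy into per-endpoint rules for enforcement points."""
    by_class = {}
    for w in workloads:
        by_class.setdefault(classify(w), []).append(w)
    rules = []
    for src_cls, dst_cls, port, action in POLICY:
        for src in by_class.get(src_cls, []):
            for dst in by_class.get(dst_cls, []):
                rules.append((src.ip, dst.ip, port, action))
    return rules

fleet = [Workload("web-01", "10.0.1.10", {"role": "web"}),
         Workload("db-01", "10.0.2.20", {"role": "db"})]
print(compile_rules(fleet))   # [('10.0.1.10', '10.0.2.20', 5432, 'allow')]
```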

And, because network security requires the involvement of more than just security staff, others, such as systems administrators, service delivery managers, or application owners, will need visibility into the workings of the segmentation tools. But they need it without administrative access to the system: they need to see but not touch! So, scalable, enterprise-grade management requires securely distributed visibility via role-based access to monitoring, reporting, and analytics.
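
A minimal sketch of what that “see but not touch” separation could look like as role-based access control; the role and permission names are hypothetical.

```python
# Illustrative role-based access: some roles can view monitoring, reporting,
# and analytics but cannot change policy. Names below are hypothetical.
PERMISSIONS = {
    "security-admin": {"view_dashboards", "view_reports", "edit_policy", "push_policy"},
    "app-owner":      {"view_dashboards", "view_reports"},   # see but not touch
    "sysadmin":       {"view_dashboards"},
}

def can(role: str, action: str) -> bool:
    return action in PERMISSIONS.get(role, set())

print(can("app-owner", "view_reports"))   # True  - visibility into their services
print(can("app-owner", "edit_policy"))    # False - no administrative access
```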

Conclusion and Recommendations

To address the rapidly evolving worlds of east/west-heavy hybrid service delivery and anywhere/anytime/any device service consumption, IT needs to reconsider use of internal segmentation for security. It has to approach the challenge by thinking outside the traditional perimeter firewall box, looking for solutions that scale better than traditional ones not just in terms of traffic filtered but also, and more urgently, in terms of manageability and price. Architects and security staffs should:

  • Assess and prioritize the system and information assets in your environment, based on what kinds of information different services manage, what services different systems provide, and how traffic needs to flow among systems; this guides segment definition
  • Examine traffic volumes and flow counts to gauge what kinds of solutions might meet your needs
  • Consider a hybrid approach incorporating host-based, virtual, and physical protections
  • Select as much for the robustness of policy definition, centralized management, and role-based access to monitoring, reporting, and analytics as for throughput and connection count