Introduction -- Causes and Impact of Packet Duplication
Seamless monitoring of network traffic is critically important for enterprise network and security administrators: it lies at the very foundation of network threat detection and remediation. Regrettably, packet duplication has long been an undesirable side-effect of network traffic monitoring. Duplicate packets add redundant information to the monitoring traffic, which can overload monitoring tools, resulting in packet drops and/or false-positive errors. Additional bandwidth may also be required to backhaul duplicate traffic to monitoring applications, and duplication can compromise the recording of voice and video data. The advent of 100 Gigabit-per-second networks has compounded these challenges: the elapsed time between packets can be less than 10 nanoseconds, making the penalty for processing redundant information at such speeds very severe.
The most common cause of duplicate packets in network monitoring traffic is the port mirroring feature of network switching devices, also known as a Switched Port Analyzer (SPAN). Using SPAN is a very common method of implementing network visibility. This feature, included in most enterprise-grade switches and routers, copies packets traversing one or more switch ports and sends them to one or more network analysis tools.
The use of the SPAN port mirroring feature inevitably creates packet duplication. Port mirroring can be configured to copy only the packets entering or only the packets leaving a switch port; however, network administrators typically want a copy of both. The problem is that when both the ingress and egress ports are mirrored, network analysis tools see duplicate packets: the timestamps may differ, but the packet contents are identical. This challenge is further compounded when SPAN is used across multiple connected devices.
To avoid the adverse effects of packet duplication, the monitoring tools themselves may be forced to remove duplicate packets before analyzing the traffic. This presents a number of challenges, including extra bandwidth consumption on monitoring tools and the diversion of precious processing resources away from critical network analysis functions. The drop in processing resources available to a monitoring or analysis tool that performs its own deduplication can be as high as 50%.
What is Packet Deduplication?
Packet deduplication refers to the capability of removing duplicate packets before network data is forwarded or transmitted to network analysis tools for monitoring, analysis, and recording. This typically yields a substantial reduction in the volume of traffic handled by such tools, enabling an increase in their operational efficiency, a reduction in false-positive errors, and the elimination of security gaps that could exist in implementations without deduplication. Without duplicate packets being identified and removed first, analysis tools may generate erroneous alarms and/or produce compromised data and results.
In some cases, advanced network switches implement a basic level of software-based Layer 2 (L2) and Layer 3 (L3) deduplication as an optional feature on a per-port basis before forwarding traffic to an inline security tool. L2 deduplication removes identical Ethernet frames where the Ethernet header and the entire IP packet match, while L3 deduplication removes TCP or UDP packets where only the IP packet matches. In such a case, the network switch checks for duplicates against only the immediately previous packet, and removes the duplicate only if it arrives within a fixed time interval (typically on the order of a millisecond) of the original packet.
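The L2 versus L3 distinction, and the fixed-interval check against only the previous packet, can be sketched as follows. This is a hypothetical illustration, not a vendor implementation; the packet representation (dicts with raw header bytes and an arrival timestamp) is an assumption made for clarity.

```python
# Hypothetical sketch of per-port L2 vs. L3 duplicate checks. Packets are
# assumed to be dicts carrying raw header bytes and an arrival time "ts".

DEDUP_INTERVAL = 0.001  # fixed time interval, on the order of a millisecond

def is_l2_duplicate(pkt, prev):
    """L2 check: the Ethernet header and the entire IP packet must match."""
    return (pkt["eth_header"] == prev["eth_header"]
            and pkt["ip_packet"] == prev["ip_packet"])

def is_l3_duplicate(pkt, prev):
    """L3 check: only the IP packet needs to match (Ethernet header ignored)."""
    return pkt["ip_packet"] == prev["ip_packet"]

def drop_if_duplicate(pkt, prev, check):
    """Drop pkt only if it matches the immediately previous packet and
    arrived within the fixed deduplication interval."""
    if prev is None:
        return False
    within_interval = (pkt["ts"] - prev["ts"]) <= DEDUP_INTERVAL
    return within_interval and check(pkt, prev)
```

Note how a mirrored copy that traverses a different switch port (and therefore carries different MAC addresses in its Ethernet header) would pass the L2 check but be caught by the L3 check.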
Deduplication and Network Packet Brokers
As enterprise network architectures continue to expand, network bandwidth levels dramatically increase, and newer tools for security, performance management, and monitoring keep getting deployed, a comprehensive network visibility layer is required.
That is exactly why in recent years network packet brokers (NPBs) have come into play. Advanced network packet brokers allow a high-level of deep packet inspection and processing including aggregation, filtering, and load balancing of traffic across the range of security and monitoring tools at data rates of up to 100 Gb/s.
With fine-grained, hardware-based deduplication typically built into advanced NPBs, the challenge of duplicate packets degrading the performance of security and monitoring tools is tackled before traffic ever reaches those tools. Packets are sent to an internal packet processor for fine-grained, flexible flow-based deduplication and optimized delivery to everything from IDS and IPS to forensics, network analyzers, and more.
Flow-based deduplication permits elimination of duplicate packets using the range of attributes shared by IP packets in a flow, including source IP, destination IP, protocol, source port, and destination port. It may also take into account information such as the number of bytes and packets transmitted, inbound and outbound interfaces, CoS/QoS markings, and TCP flags.
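The shared attributes listed above form what is commonly called the 5-tuple. A minimal sketch of building a flow identifier from it might look like this; the field names are assumptions for illustration, not a specific NPB API.

```python
# Hypothetical flow-key construction from a parsed packet. Grouping packets
# by this key lets a deduplicator compare candidates only within one flow.

def flow_key(pkt):
    """Build a flow identifier from the packet's 5-tuple."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["protocol"],
            pkt["src_port"], pkt["dst_port"])
```

Restricting duplicate comparisons to packets sharing a flow key keeps the candidate set small, which is part of what makes deduplication feasible at high data rates.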
Hardware-based deduplication in NPBs also allows flexible configuration of the deduplication window, up to 500 milliseconds: any duplicate packet received within that sliding window is removed. Another important feature is the ability to perform full packet comparison, not just header comparison. Network managers will also appreciate the ability to select header fields that should not be considered a 'source' of difference between packets, for example the TTL field. In that case, if two packets differ only in their TTL values, the later one is still removed as a duplicate.
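Combining the sliding window, full-packet comparison, and field masking, a simplified software model might look like the following. This is a sketch under stated assumptions, not the hardware implementation: packets are raw IPv4 bytes, full-packet comparison is approximated with a SHA-256 digest, and ignoring the TTL is modeled by zeroing that header byte before hashing (in a real system the IP checksum would also have to be masked, since it changes with the TTL).

```python
import hashlib

WINDOW = 0.5      # sliding deduplication window in seconds (up to 500 ms)
TTL_OFFSET = 8    # byte offset of the TTL field in an IPv4 header

def packet_digest(raw, ignore_ttl=True):
    """Hash the full packet, optionally zeroing the TTL byte so that two
    packets differing only in TTL produce the same digest."""
    if ignore_ttl:
        raw = raw[:TTL_OFFSET] + b"\x00" + raw[TTL_OFFSET + 1:]
    return hashlib.sha256(raw).digest()

class Deduplicator:
    def __init__(self, window=WINDOW):
        self.window = window
        self.seen = {}  # digest -> timestamp of the last matching packet

    def is_duplicate(self, raw, ts, ignore_ttl=True):
        """Return True if an identical packet was seen within the window."""
        digest = packet_digest(raw, ignore_ttl)
        last = self.seen.get(digest)
        self.seen[digest] = ts
        return last is not None and (ts - last) <= self.window
```

A usage example: the same packet seen twice within the window is flagged, even if its TTL was decremented in between, while a repeat arriving after the window expires is treated as new traffic.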
The overall results of the deduplication capability provided by network packet brokers are dramatically more thorough elimination of duplicate traffic, support for bandwidth levels of up to 100 Gb/s, and a corresponding increase in the efficiency of attached network security tools.
Deduplication in Niagara Network Visibility Fabric
Niagara Networks provides all the building blocks for an advanced Visibility Adaptation Layer at all data rates up to 100 Gb/s, including taps, bypass elements, packet brokers, and a unified management layer.
The Niagara modular N2 visibility node product series provides a single multi-purpose platform that covers all the visibility adaptation scenarios in a network including one network link to one tool (one-to-one), one network link to multiple tools (one-to-many), multiple network links to one tool (many-to-one) and multiple network links to multiple tools (many-to-many) – interlaced and load balanced into a network-wide fabric.
The series can be populated with a wide range of high-density, high-versatility, processor-accelerated modules. It supports a range of functional capabilities, including network tap, bypass, packet broker and deep packet processing applications.
In its role as a packet processing and analysis engine, Niagara's N2 modular visibility node provides hardware-based, fine-grained deduplication of redundant (duplicate) packets before they reach analysis or security tools, through the use of Packetron, an open-system packet processor module. Packetron offloads packet processing from the N2 host and bridges the gap between increasing network traffic throughput and the processing needed for fine-grained, flow-based deduplication across the range of data duplication scenarios. The Packetron module is based on Intel's Xeon x86 architecture and can directly process traffic from any one of the Niagara N2 NPB's 2.56 Tbps traffic streams.
Niagara Networks are industry specialists in network visibility, providing advanced network solutions for the specific needs of individual enterprises and networks. Don't leave your network vulnerable to security threats: schedule a consultation with one of our network visibility experts today to evaluate your network visibility challenges.