Never before have there been more WAN acceleration products, leveraging a wider variety of technologies with varying levels of results. With this in mind, Jeff Aaron provides an overview and examines the various techniques for improving bandwidth, latency and loss across the WAN.
There are a variety of WAN acceleration techniques that can improve application performance across distributed offices. Some focus on maximising bandwidth utilisation, others address latency (the time delay between when something is sent and when it is received), and still others address network integrity issues that can prevent the effective delivery of packets across the WAN.
The goal of WAN acceleration is to address all of these issues - bandwidth, latency and packet loss. Below are the most common techniques that address these challenges, resulting in maximum application performance across the WAN.
De-duplication (data reduction)
The most efficient way to accelerate the transfer of information across the WAN is not to send it in the first place. This is the major principle behind WAN de-duplication (also known as data reduction). A WAN acceleration appliance equipped with this technology examines all data in real time before it is sent across the WAN and stores this information in local data stores. Whenever duplicate information is detected, instructions are sent to the appropriate appliance to deliver the information locally instead of re-sending it across the WAN.
Using this 'network memory' technique can eliminate over 90 per cent of WAN traffic under the right circumstances. It provides various levels of improvement based on the application environment and the repetitiveness of the traffic. For example, interactive web traffic may see a 10-fold performance improvement, while large data backups may see a 100x improvement. Performance naturally increases in environments with lots of duplicate information and after data reduction appliances have had an opportunity to 'memorise' the network.
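As a rough illustration of the principle only (not any vendor's actual implementation), the sketch below fingerprints fixed-size chunks and replaces repeated chunks with short hash references; real appliances typically use variable, content-defined chunking and large disk-based stores:

```python
import hashlib

CHUNK = 64  # bytes per chunk; illustrative only, real systems use content-defined chunks

class DedupSender:
    def __init__(self):
        self.seen = set()  # chunk digests already synchronised with the far end

    def encode(self, data: bytes):
        """Yield ('raw', chunk) for new data, ('ref', digest) for duplicates."""
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).digest()
            if digest in self.seen:
                yield ('ref', digest)  # send a 32-byte reference instead of the chunk
            else:
                self.seen.add(digest)
                yield ('raw', chunk)

class DedupReceiver:
    def __init__(self):
        self.store = {}  # local data store: digest -> chunk

    def decode(self, messages) -> bytes:
        out = bytearray()
        for kind, payload in messages:
            if kind == 'raw':
                self.store[hashlib.sha256(payload).digest()] = payload
                out += payload
            else:  # duplicate: deliver the data from the local store
                out += self.store[payload]
        return bytes(out)
```

Sending the same data a second time produces only references, which is why repetitive traffic such as backups sees the largest gains.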
Compression
Compression is used to reduce the bandwidth consumed by the traffic travelling across the WAN. 'Payload' compression uses algorithms to identify relatively short sequences that are repeated frequently over time. These sequences are then replaced with shorter segments of code to reduce the size of transmitted data. 'Header' compression can provide additional bandwidth gains by using further specialised algorithms to shrink the repetitive protocol header information carried in every packet.
The gains realised vary depending on the mix of traffic on the WAN but are fairly consistent across different solutions. Text typically yields compression ratios of between two and five times. On the other hand, pre-compressed content such as zip files will generally yield no gains. In practice, deploying compression techniques on the WAN will typically double effective WAN bandwidth.
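The effect is easy to demonstrate with a standard compressor; the figures below are illustrative, not vendor benchmarks:

```python
import zlib

# Repetitive text compresses well under payload compression...
text = b"The quick brown fox jumps over the lazy dog. " * 200
compressed_once = zlib.compress(text)
ratio = len(text) / len(compressed_once)

# ...but re-compressing already-compressed (high-entropy) data yields
# little or nothing, which is why zip files see no gain across the WAN.
compressed_twice = zlib.compress(compressed_once)
```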
Latency mitigation
The time it takes for information to travel from a sender to a receiver and back is called network latency. Since the speed of light is constant, the minimum latency is directly proportional to the distance travelled between the two network endpoints.
However, latency is also impacted by queuing and processing in routers and other network elements along the path. Many file, email and document management systems leverage the transmission control protocol (TCP), which has a variety of congestion control functions that can actually introduce quite a bit of latency. WAN acceleration devices often leverage a variety of TCP acceleration techniques to overcome this, including selective acknowledgements and adjustable window sizing. There are also 'chatty' protocols, such as Microsoft's common internet file system (CIFS), which also add latency.
Chatty protocols like this can require hundreds or even thousands of round trips to successfully transfer a single file. This is typically not an issue when file servers are deployed on the same local area network (LAN) as clients. However, when CIFS is used across a WAN, as is the case when branch offices are accessing file servers located within a centralised data centre, both latency and bandwidth constraints can adversely impact file sharing performance. To overcome this, different approaches have been adopted, including 'read-aheads' and 'write-behinds', whereby requests are pipelined on behalf of the client to eliminate round-trip delays.
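A back-of-the-envelope sketch shows why pipelining helps (the 80 ms RTT, 60 KB block size and 1 ms per-request service time are illustrative assumptions, not CIFS specifics):

```python
RTT_S = 0.080                 # assumed WAN round-trip time: 80 ms
BLOCK = 60 * 1024             # assumed bytes fetched per request
FILE_SIZE = 10 * 1024 * 1024  # a 10 MB file

blocks = FILE_SIZE // BLOCK + 1

# Serial, 'chatty' behaviour: each request waits a full round trip.
serial_time = blocks * RTT_S

# Read-ahead pipelining: requests are issued on the client's behalf and
# overlap in flight, so only one round trip is paid up front (assumed
# 1 ms of service time per request thereafter).
pipelined_time = RTT_S + blocks * 0.001
```

Under these assumptions the serial transfer takes tens of seconds while the pipelined one completes in well under a second, which is the essence of 'read-ahead' optimisation.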
Network integrity (loss mitigation)
Even when the physical layer of a WAN is error-free, some technologies and provisioning practices still lead to packet loss at the network layer. In fact, it is possible to see network packet loss rates as high as 8 per cent in some MPLS and IP-VPN networks. When this type of loss is coupled with high latency and the retransmission and congestion-avoidance behaviour inherent to TCP, it is not surprising that application performance suffers across a WAN.
Forward error correction (FEC) is a technology that is well known for its ability to correct bit errors at the physical layer. This technology is often adapted to operate on packets at the network layer to improve application performance across WANs that have high loss characteristics. FEC works by adding an additional error recovery packet for every 'N' packets sent across the WAN. This FEC packet contains information that can be used to reconstruct any single packet within the group of N.
If one of these N packets happens to be lost during transfer across the WAN, the FEC packet is used on the far end of the WAN link to reconstitute the lost packet in real time. This eliminates the need to retransmit the lost packet across the WAN, which dramatically reduces application response time and improves WAN efficiency. An advanced implementation will dynamically adjust FEC overhead in response to changing link conditions for maximum effectiveness in environments with high packet loss.
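A minimal sketch of single-loss recovery using a byte-wise XOR parity packet, the simplest form of packet-level FEC (production implementations use stronger codes and, as noted above, adapt the ratio of FEC packets dynamically):

```python
def xor_parity(packets):
    """Build one FEC packet as the byte-wise XOR of N equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Reconstruct the single missing packet (the None entry) from the parity.

    XORing the parity with every surviving packet cancels them out,
    leaving exactly the bytes of the lost packet.
    """
    missing = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                missing[i] ^= b
    return bytes(missing)
```

Because the reconstruction happens at the far end of the link, no retransmission across the WAN is needed.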
Packet order correction (POC) is another useful technique to overcome out of order packet delivery. POC works by re-sequencing packets on the far end of a WAN link 'on the fly' to avoid re-transmissions that occur when packets arrive out of order. By performing the functionality in a dedicated WAN optimisation device (as opposed to an end station or router), enterprises have the scalability needed to handle high-volume, high-throughput data streams with minimal added latency.
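A minimal re-sequencing buffer illustrates the idea (real devices also bound how long a gap is held before giving up, which this sketch omits):

```python
class Resequencer:
    """Hold out-of-order packets briefly and release them in sequence order."""

    def __init__(self):
        self.next_seq = 0   # next sequence number expected by the receiver
        self.pending = {}   # buffered out-of-order packets: seq -> payload

    def push(self, seq: int, payload):
        """Accept a packet; return the payloads now deliverable in order."""
        self.pending[seq] = payload
        out = []
        # Drain every consecutive packet starting from the expected one.
        while self.next_seq in self.pending:
            out.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return out
```

Packet 1 arriving before packet 0 is simply held; once packet 0 arrives, both are delivered in order and no TCP retransmission is triggered.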
Quality of service (QoS)
In an effort to maximise WAN utilisation, most enterprises will oversubscribe their WAN links. When demand exceeds the capacity of a WAN link, and traffic is contending for the same limited resource, less important traffic (such as web browsing) may take bandwidth away from business-critical applications. To prevent this, some WAN acceleration solutions implement quality of service techniques to classify and prioritise traffic based on applications, users and other criteria.
QoS typically involves three primary functions, which can all have a significant impact on application performance:
- packet marking, which is a means whereby network elements provide different levels of service to different packets, based on markings in the IP header;
- application classification, which enables different priorities and handling instructions to be applied to individual types of traffic;
- queuing and shaping, which improves traffic delivery through various congestion points in a network using queuing policies, dropping disciplines and service disciplines.
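The classification and queuing steps above can be sketched with a strict-priority scheduler (the traffic classes and their mapping are illustrative assumptions; real solutions also implement weighted fairness and rate shaping):

```python
import heapq

class QosScheduler:
    """Strict-priority queuing: the lowest class number is always served first."""

    # Example classification policy (assumed, not from any particular product).
    CLASSES = {'voice': 0, 'business': 1, 'web': 2}

    def __init__(self):
        self.heap = []
        self.counter = 0  # tie-breaker preserving FIFO order within a class

    def enqueue(self, app: str, packet):
        prio = self.CLASSES.get(app, 2)  # unclassified traffic gets lowest priority
        heapq.heappush(self.heap, (prio, self.counter, packet))
        self.counter += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2]
```

Even if web traffic arrives first, voice and business-critical packets are transmitted ahead of it when the link is congested.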
There are various challenges that can adversely impact WAN performance, including limited bandwidth, high latency and packet loss. In many instances, these issues are intertwined and require a combination of WAN optimisation techniques to ensure maximum application performance. Using a subset of the required tools can lead to underwhelming or unexpected performance results. As an analogy, doing data reduction without loss mitigation is like going 100mph on a freeway riddled with potholes. Similarly, doing latency mitigation without QoS is like driving a Ferrari on a heavily congested road.
The most effective WAN acceleration solutions use numerous optimisation techniques to address all WAN challenges at a macro level. By understanding these optimisation technologies - and the challenges they address - one can better understand WAN acceleration and the role it can play throughout the enterprise.
Jeff Aaron is director for product marketing at Silver Peak Systems.