Jeff Aaron, director of product marketing at Silver Peak, looks at an increasingly important enabler of strategic IT initiatives, including branch office server and storage centralisation, wide area file sharing and business continuity planning.

Recent advances in technology have enabled WAN acceleration to transition from a tactical fix to a strategic IT investment. However, these advances have also led to increased confusion in the marketplace.

Never before have there been so many WAN acceleration products available, leveraging a wide variety of technologies and delivering varying results.

Partly as a result of this, and because of the relative newness of the technology, there is a lot of confusion and uncertainty about some of the more common WAN acceleration technologies being employed today.

This article will look at how these technologies work and why they are important.

Data reduction

The most efficient way to accelerate the transfer of information across the WAN is to not send it in the first place. This is the major principle employed by data reduction, a new WAN acceleration technology that provides significant benefits in the form of increased WAN bandwidth efficiency and reduced application response time.

In a data reduction scenario, acceleration appliances examine all data in real-time prior to it being sent across the WAN. This information is stored in local data stores on each appliance.

Whenever duplicate information is detected, references are sent to the appropriate appliance instructing the device to deliver the information locally (instead of re-sending it across the WAN). By preventing repetitive information from traversing the WAN, data reduction can cut WAN bandwidth consumption by more than 90 per cent.

By delivering information from local data stores, data reduction helps to provide LAN-like performance across the WAN.
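The reference-substitution idea behind data reduction can be sketched in a few lines of Python. The fixed 64-byte chunks and SHA-256 fingerprints below are illustrative assumptions; commercial appliances use far more sophisticated, variable-length fingerprinting over their data stores.

```python
import hashlib

CHUNK = 64  # illustrative fixed chunk size

def sender_encode(data: bytes, store: dict) -> list:
    """Replace chunks already seen with short hash references."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).digest()
        if h in store:
            out.append(("REF", h))          # duplicate: send only a reference
        else:
            store[h] = chunk
            out.append(("DATA", h, chunk))  # first sighting: send the payload
    return out

def receiver_decode(msgs: list, store: dict) -> bytes:
    """Rebuild the stream, serving duplicates from the local data store."""
    parts = []
    for m in msgs:
        if m[0] == "DATA":
            _, h, chunk = m
            store[h] = chunk
            parts.append(chunk)
        else:
            parts.append(store[m[1]])       # delivered locally, no WAN transfer
    return b"".join(parts)
```

Each repeated chunk crosses the WAN as a fingerprint of a few dozen bytes rather than the full payload, which is where both the bandwidth saving and the LAN-like response time come from.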


Compression

Compression refers to a family of techniques for reducing the bandwidth consumed by traffic traversing the WAN by shrinking the data travelling across it. Several types are in common use.

Payload compression uses algorithms to identify relatively short byte sequences that are repeated frequently over time. These sequences are then replaced with shorter segments of code to reduce the size of transmitted data.

Simple algorithms can find repeated bytes within a single packet; more sophisticated algorithms can find duplication across packets and even across flows.

Header compression can provide additional bandwidth gains by reducing packet header information using specialised compression algorithms.

The gains realised by compression techniques vary depending on the mix of traffic traversing the WAN, but are fairly consistent across different vendors' solutions. Text and spreadsheets, for example, may yield 2-5x compression ratios.

On the other hand, pre-compressed content, like zip files, cannot be compressed much further. Therefore, additional compression does not help these file types.
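A quick experiment with a general-purpose compressor illustrates both points: repetitive text shrinks dramatically, while recompressing already-compressed output yields essentially nothing. Here zlib simply stands in for whatever algorithm a given appliance actually uses.

```python
import zlib

# Repetitive business data compresses well
text = b"quarterly sales figures, region north, " * 100
compressed_once = zlib.compress(text)

# Compressing the already-compressed output gains almost nothing
compressed_twice = zlib.compress(compressed_once)

ratio = len(text) / len(compressed_once)
```

This is why zip archives, JPEG images and similar pre-compressed file types see little benefit from payload compression alone, and why data reduction and latency techniques matter for that traffic.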

Enterprises deploying compression technology will typically see around a 50 per cent improvement in WAN utilisation, which is the equivalent of doubling the effective WAN bandwidth.

Some additional benefits may be garnered from solutions that apply compression across various flows of traffic (called crossflow compression) and can employ compression techniques on UDP traffic.

VoIP, for example, can significantly benefit from UDP header compression when used in conjunction with other techniques, such as packet coalescing (see below).

Latency mitigation

The time it takes for information to travel from a sender to a receiver and back is called the latency of the network. Because the speed of light is finite and constant, minimum latency is directly proportional to the distance between the two endpoints of communication.

In other words, the longer the distance, the longer the minimum delay. In real-life, the latency is also impacted by queuing and processing delay in routers and other network elements along the path.
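The physics can be put into numbers. Assuming light travels at roughly two-thirds of its vacuum speed in optical fibre (about 2×10^8 m/s), the floor on round-trip time over a given distance is easy to bound:

```python
SPEED_IN_FIBRE = 2.0e8  # m/s, roughly 2/3 of c in glass

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip latency from propagation delay alone."""
    return 2 * distance_km * 1000 / SPEED_IN_FIBRE * 1000
```

A 5,600 km path, roughly New York to London, has a floor of about 56 ms round trip before a single router queue or processing delay is counted.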

Latency often has a big impact on the performance of applications across the WAN. For TCP bulk data transfers, latency can severely limit throughput. This is primarily because TCP congestion control limits the amount of unacknowledged data in transit.

Once the amount of unacknowledged data reaches the congestion window size, transmission of new data is postponed until older data is acknowledged.
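This cap is easy to quantify: TCP throughput can never exceed the window size divided by the round-trip time, regardless of the capacity of the link. A small helper makes the point:

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: one window per round trip."""
    return window_bytes * 8 / rtt_seconds

# A classic 64 KB window over a 100 ms WAN caps out near 5.2 Mbit/s,
# no matter how fat the pipe is.
```

With a 64 KB window and a 100 ms round trip, a single TCP flow tops out around 5.2 Mbit/s even on a gigabit link, which is why mitigating latency matters as much as adding bandwidth.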

Going up the stack, there are several acceleration techniques that are used to overcome the latency issues associated with application delivery across a WAN. These include:

TCP acceleration

The TCP protocol was designed to operate reliably over almost any transmission medium regardless of transmission rate, delay, corruption, duplication, or reordering of segments.

However, the introduction of high speed telecommunications links has resulted in ever-higher transmission speeds, which often exceed the domain for which TCP was originally engineered. TCP acceleration techniques, such as window scaling, selective acknowledgements and local acknowledgement by the appliance, help the protocol keep long, high-capacity links full.

CIFS acceleration

Common internet file system (CIFS) is a protocol developed by Microsoft for remote file access that allows most applications to open and share files across the internet or other IP based networks.

Some specific capabilities of CIFS include file access, record locking, read / write privileges, change notification, server name resolution, request batching, and server authentication.

Application-specific acceleration

Some vendors perform application-specific latency optimisation techniques to improve the performance of specific types of traffic across the WAN, including SQL, HTTP and Microsoft's Messaging API (MAPI). Furthermore, prefetching of content can be used to overcome latency when delivering some applications across the WAN.

Quality of service (QoS)

In an effort to maximise WAN utilisation, most enterprises will oversubscribe their WAN links. When demand exceeds the capacity of a WAN link, and traffic is contending for the same limited resource, less important traffic (such as web browsing) may take bandwidth away from business-critical applications.

To prevent this, some WAN acceleration solutions implement quality of service techniques to classify and prioritise traffic based on applications, users, and other criteria.

QoS involves several functions: 1) classification of packets into traffic classes based on characteristics such as source, destination addresses, and/or applications and 2) queuing and service mechanisms that are used to apply service policies based on these classifications, including bandwidth allocation.
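The two functions can be sketched together in a few lines. The application names and class rankings below are hypothetical; a real implementation would classify on addresses, ports and application signatures as well, and would typically use weighted rather than strict-priority queuing:

```python
import heapq
from itertools import count

# Hypothetical traffic classes: lower number = higher priority
PRIORITY = {"voip": 0, "erp": 1, "web": 2}

class PriorityScheduler:
    """Strict-priority queue: classify packets, then serve the highest class first."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within a class

    def classify(self, packet: dict) -> int:
        return PRIORITY.get(packet.get("app"), 3)  # unknown apps: best effort

    def enqueue(self, packet: dict) -> None:
        heapq.heappush(self._heap, (self.classify(packet), next(self._seq), packet))

    def dequeue(self) -> dict:
        return heapq.heappop(self._heap)[2]
```

Under congestion, voice and business-critical flows drain first, while web browsing waits its turn rather than starving them of bandwidth.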

Loss mitigation

Even when the physical layer of a WAN is error-free, some technologies and provisioning practices still lead to packet loss at the network layer. In fact, it is not unusual to see network packet loss rates as high as 8 per cent in some networks.

When this type of loss is coupled with high latency and the retransmission and congestion-avoidance behaviour inherent to TCP, it is not surprising that application performance suffers across a WAN.

Forward error correction (FEC) is a technology that is well known for its ability to correct bit errors at the physical-layer. It works by adding an additional error recovery packet for every 'N' packets that are sent across the WAN.

This FEC packet contains information that can be used to reconstruct any single packet within the group of N.

If one of these N packets happens to be lost during transfer across the WAN, the FEC packet is used on the far end of the WAN link to reconstitute the lost packet.

This eliminates the need to retransmit the lost packet across the WAN, which dramatically reduces application response time and improves WAN efficiency.
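The single-parity scheme described above can be demonstrated with byte-wise XOR, the simplest FEC code: the parity packet is the XOR of the N data packets, so XOR-ing it with the N-1 survivors yields the missing one. This is an illustrative sketch; production implementations pad unequal packets and adapt N to the observed loss rate.

```python
def xor_parity(packets: list) -> bytes:
    """FEC packet: byte-wise XOR of N equal-length data packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(survivors: list, parity: bytes) -> bytes:
    """Reconstruct the single lost packet from the N-1 survivors plus parity."""
    return xor_parity(survivors + [parity])
```

The trade-off is one extra packet per group of N on the wire in exchange for avoiding a full round trip of retransmission delay.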

Packet coalescing

When packets are small, packet headers consume substantial bandwidth in comparison to the amount of end-user data transferred. Packet coalescing combines multiple user packets travelling between the same two sites into a single coalesced packet.

Used in conjunction with header compression, this amortises a single header over multiple packets thus decreasing overhead, and therefore bandwidth requirements. Packet coalescing is particularly beneficial for web applications, VoIP, and interactive applications, like Citrix.
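The arithmetic behind coalescing is straightforward. Taking a typical VoIP packet as an assumed example, 40 bytes of IPv4/UDP/RTP headers carrying a 20-byte voice sample, wire efficiency rises quickly as packets share a header:

```python
HEADER = 40   # IPv4 + UDP + RTP headers on a typical VoIP packet
PAYLOAD = 20  # e.g. a 20 ms G.729 voice sample

def efficiency(n_coalesced: int) -> float:
    """Fraction of bytes on the wire that are user data
    when n packets share one header."""
    return n_coalesced * PAYLOAD / (HEADER + n_coalesced * PAYLOAD)
```

One packet per header puts only a third of the transmitted bytes to work; coalescing five lifts that above 70 per cent, before header compression shrinks the remaining header further.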

Wide area file services (WAFS)

WAFS is a widely misused term that has sometimes come to mistakenly represent any solution that accelerates the performance of file services. In reality, WAFS is a caching technology that makes it possible to access remote file services as though they were local.

Like other caching solutions, a WAFS appliance simulates an application server, enabling local delivery of specific content. It sits between clients and applications, watching all requests and locally saving copies of the responses.

If another request is made for the same object, the WAFS appliance acts as a proxy, delivering information directly without having to go back across the WAN to the original server.
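Stripped to its essence, this proxy behaviour is a keyed cache in front of a WAN fetch. In the deliberately minimal sketch below, `origin_fetch` is a stand-in for the real CIFS round trip to the data centre:

```python
class WafsCache:
    """Minimal caching-proxy sketch: serve repeat file requests locally."""
    def __init__(self, origin_fetch):
        self._origin = origin_fetch  # callable that crosses the WAN
        self._cache = {}

    def get(self, path: str) -> bytes:
        if path not in self._cache:                # miss: fetch across the WAN
            self._cache[path] = self._origin(path)
        return self._cache[path]                   # hit: local delivery
```

Every request after the first is served at LAN speed, which is the appeal; keeping that cached copy consistent with the origin server is the coherency problem discussed below.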

WAFS enables users to access file services faster, and reduces costs, as remote offices no longer need file servers, backup equipment and additional staff to look after remote office storage. Remote data is also consolidated in the data centre, where it can be held more securely.

However, WAFS only accelerates file services, and is therefore not as cost effective when compared to other WAN acceleration solutions that also accelerate other key enterprise applications, such as email, web, VoIP, etc.

In addition, because WAFS uses caching technology, it has the potential for coherency issues. In other words, when clients are retrieving and modifying information stored in a local cache, it is easy for this information to get out of sync with information stored on the original application server.


There are many technologies available to address the various issues associated with application delivery across a WAN. The best solution will implement a variety of techniques, both old and new, to improve bandwidth efficiency while reducing perceived application response time.