[GigaOM, Juku and Bridgeworks] The promise of next-generation WAN optimization

Bandwidth, throughput, and latency aren’t issues within a data center, but things change drastically when you have to move data over distance. Applications are designed to process data and return results as fast as possible, because users and business processes demand instant access to resources. That is hard to accomplish when data is physically far from where it is needed.
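
The latency penalty is easy to quantify: a single TCP stream can never move data faster than its window size divided by the round-trip time, so the same link that feels instantaneous inside a data center slows to a crawl across a continent. A back-of-the-envelope sketch in Python (the window size and RTT values are illustrative assumptions):

```python
# Back-of-the-envelope: why distance, not bandwidth, caps a single
# TCP stream. Throughput is bounded by window_size / RTT no matter
# how fat the pipe is. Numbers are illustrative assumptions.

def max_tcp_throughput_gbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP stream's throughput, in Gb/s."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e9

WINDOW = 64 * 1024  # classic 64 KiB window, no window scaling

for rtt in (0.5, 10.0, 80.0):  # same campus, metro link, transcontinental
    print(f"RTT {rtt:5.1f} ms -> at most {max_tcp_throughput_gbps(WINDOW, rtt):.3f} Gb/s")
```

At 0.5 ms the bound is about 1 Gb/s; at a transcontinental 80 ms it collapses to under 7 Mb/s, with the physical link capacity playing no role at all.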

In the past decade, with the exponential growth of the internet, remote connectivity, and, later, large quantities of data, lack of bandwidth became a major issue. A first generation of wide area network (WAN) optimization solutions appeared on the market with the intent of overcoming the constraints of limited-bandwidth connectivity. Sophisticated data-reduction techniques, such as compression, deduplication, traffic shaping, caching, and proxying, were combined to minimize traffic between data centers and branch offices and for DC-to-DC communication. WAN optimization can help improve the quality and quantity of services delivered to branch offices, enable storage replication over longer distances for disaster recovery or business continuity, reduce WAN costs, and improve mobile connectivity.
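
To make the first-generation approach concrete, here is a deliberately simplified sketch of chunk-level deduplication combined with compression, the kind of data reduction those appliances applied before putting traffic on the wire. The chunk size, framing, and hash choice are hypothetical, not taken from any product:

```python
# Simplified sketch of first-generation data reduction: fixed-size
# chunk deduplication plus compression. Framing and chunk size are
# hypothetical, not any vendor's implementation.
import hashlib
import zlib

CHUNK = 4096
seen: set[bytes] = set()  # digests the remote peer is assumed to hold

def reduce_for_wan(data: bytes) -> list[bytes]:
    """Send a 35-byte reference for chunks already seen,
    a compressed payload otherwise."""
    frames = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            frames.append(b"REF" + digest)            # 3 + 32 bytes
        else:
            seen.add(digest)
            frames.append(b"RAW" + zlib.compress(chunk))
    return frames

unit = b"office-document text ".ljust(64, b".")  # 64-byte repeating unit
payload = unit * 640                             # 40 KiB of redundant data
frames = reduce_for_wan(payload)
print(len(payload), "bytes in ->", sum(len(f) for f in frames), "bytes on the wire")
```

On redundant branch-office traffic like this, the reduction is dramatic; the next paragraph explains why that stopped being the case.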

Recently, things have changed. Traditional WAN optimization was conceived mainly to address a lack of bandwidth, at a time when legacy protocols were designed for LAN connectivity, data was neither compressed nor encrypted, and computers were unable to manage huge amounts of complex data. Now high-bandwidth links (10 Gb/s or more) are cheaper than in the past, new protocols are emerging, data is compressed and encrypted at the source, and even mobile devices can handle huge data streams. Traditional WAN optimization was simply not designed to manage these new requirements efficiently. Efficiency, utilization, and latency are the real issues now.
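
The shift is easy to demonstrate: compression and deduplication work by finding redundancy, but data that arrives already compressed or encrypted looks statistically random, leaving nothing to remove. A quick illustration, using random bytes as a stand-in for an encrypted stream:

```python
# Why data reduction stalls on modern traffic: encrypted or
# already-compressed bytes look random, and compressors cannot
# shrink randomness.
import os
import zlib

text = b"plain, repetitive log lines from a branch-office app\n" * 2000
encrypted_like = os.urandom(len(text))  # stand-in for an encrypted stream

for label, data in (("plain text", text), ("encrypted-looking", encrypted_like)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label:>17}: compressed to {ratio:.0%} of original size")
```

The plain text shrinks to a tiny fraction of its size; the encrypted-looking stream doesn't shrink at all, so a first-generation appliance adds overhead without adding value.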

Next-generation WAN optimization, designed with a radically new philosophy, promises unprecedented scalability, better latency management, and uncompromised link utilization. One new approach, rooted in deep storage expertise, looks at the problem in a radically different way: it accounts for modern data types (compressed and encrypted) and focuses on mitigating latency while maximizing efficiency and predictability at scale. The result is an overall TCO improvement and outstanding utilization of links.
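
This post does not spell out the mechanics, but one widely used way to restore utilization on a long, fast link is to keep the full bandwidth-delay product in flight, for example by spreading a transfer across parallel streams. A hedged sketch of the arithmetic (the link speed, RTT, and per-stream window are assumptions, not product specifications):

```python
# Hedged illustration, not a description of any specific product:
# to saturate a link, the data in flight must cover the
# bandwidth-delay product (BDP); parallel streams are one common way.
import math

def streams_needed(link_gbps: float, rtt_ms: float, window_bytes: int) -> int:
    bdp_bytes = link_gbps * 1e9 / 8 * (rtt_ms / 1000)  # bytes needed in flight
    return math.ceil(bdp_bytes / window_bytes)

# Assumed: 10 Gb/s transcontinental link, 80 ms RTT, 256 KiB per-stream window
print(streams_needed(10, 80, 256 * 1024), "parallel streams to fill the pipe")
```

With those assumptions the link needs roughly 100 MB in flight, i.e. hundreds of parallel streams, which is why latency mitigation rather than data reduction becomes the engineering problem.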

A report by GigaOM Research and Juku, underwritten by Bridgeworks, is available. To download it: Gigaom Research – Bridgeworks.
