Online commerce is in its prime. Never before have organizations in every industry placed such a priority on Web presence. Even brick-and-mortar companies are scrambling to put their best foot forward online rather than be eclipsed by their more technology-savvy competitors. E-commerce vendors pour enormous resources into elaborate, carefully structured websites designed to lead the end-user to conversion. One factor, however, is commonly overlooked in the process: performance.
Since the proliferation of broadband Internet, user expectations of website performance have skyrocketed. A wealth of statistics shows that people expect websites, video and audio to load faster than ever before. According to Forrester Research, 36 percent of unique visitors will leave a website if it fails to load within the first three seconds. A 2009 survey by ResearchLine found that 26 percent of respondents would move to a competitor’s website if a vendor’s website failed to perform, resulting in an immediate revenue loss of 26 percent and a future loss of 15 percent. These statistics paint a picture of tangible, quantifiable consequences for performance deficits.
The traditional answer to these performance issues is the content delivery network (CDN). CDNs attempt to improve performance by placing content as close to the end-user as possible, a technique known as “edge caching.” Edge caching distributes copies of the source data, each stamped with an expiration time, to servers all over the world, and users are able to retrieve these nearby copies much more quickly. Some advanced CDNs also use proprietary algorithms and massive distributed networks to proactively identify trouble spots on the public Internet and reroute content around them.
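To make the mechanism concrete, here is a minimal sketch of a time-limited edge cache in Python; the five-minute expiration and the fetch callback are illustrative, not any particular CDN’s API:

```python
import time

class EdgeCache:
    """Minimal TTL cache, as an edge node might hold copies of origin content."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (content, expiry timestamp)

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry and entry[1] > time.time():
            return entry[0]  # cache hit: served from the edge, near the user
        content = fetch_from_origin(url)  # cache miss: full trip to the origin
        self.store[url] = (content, time.time() + self.ttl)
        return content

cache = EdgeCache(ttl_seconds=300)  # hypothetical five-minute expiration
page = cache.get("/index.html", lambda url: b"<html>...</html>")
```

Once the expiration passes, the next request pays the full round trip to the origin again, which is exactly the management burden discussed below.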
What CDNs Do – but Better
Although CDNs have solved the delivery problem for the better part of a decade, dynamic content and steadily increasing Internet congestion have rendered the solution all but obsolete. Edge caching is best suited for static content, and the advantages evaporate when a reliable, two-way connection between the user and the host is necessary.
The exploding technologies of video conferencing, VoIP, live broadcasts, e-learning, rich media, Software-as-a-Service applications, and interactive e-media all fall under this category. By their very nature, these technologies cannot be cached.
Another problem is that CDNs rely completely on the public Internet. Unfortunately, the Internet was created as a research network and was never intended to support the incredible demands that it is now burdened with. As information traverses it, congestion, latency, packet loss and other factors delay transmission — and ultimately degrade the experience of the end-user.
How do content providers ensure that end-users (both businesses and consumers) experience the same ease of access they did a decade ago, but with the dynamic content they want to transfer today?
Enter cloud acceleration, a natural evolutionary development of the CDN. Cloud acceleration does what CDNs do, but with dynamic content capabilities, and at a performance level that was previously unattainable. On top of this, providers are able to offer cloud acceleration at a lower price point, because end-users are not paying for a decade’s worth of infrastructure designed and built out to enhance edge-caching capabilities.
Finally, cloud acceleration does not rely on the public Internet for delivery but rather on a fully managed, private network. The control this brings allows it not merely to bandage the Internet’s inherent problems, as a CDN does, but to sidestep them altogether.
So, how exactly does cloud acceleration achieve all of this?
To the User’s Lap
It’s important to review how CDNs function. Three distinct “miles” make up the path between the content origin and the end-user requesting it. The first mile is the connection between the origin server and the Internet backbone, e.g., a T1, DSx, OCx or Ethernet link. The middle mile covers the majority of the distance, crossing one or more interconnected carrier backbones. Finally, the last mile is the end-user’s own connection, such as DSL, cable or wireless.
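To see why the middle mile dominates, consider raw propagation delay: light in optical fiber covers roughly 200 km per millisecond, about two-thirds of its speed in a vacuum. A back-of-the-envelope sketch, with illustrative distances:

```python
# Light in fiber covers roughly 200 km per millisecond (about 2/3 of c).
KM_PER_MS = 200.0

def propagation_rtt_ms(first_mile_km, middle_mile_km, last_mile_km):
    """Round-trip propagation delay alone, ignoring queuing and packet loss."""
    one_way_ms = (first_mile_km + middle_mile_km + last_mile_km) / KM_PER_MS
    return 2 * one_way_ms

# Illustrative path: 50 km first mile, 8,000 km transcontinental middle mile,
# 30 km last mile. The middle mile accounts for nearly all of the delay.
print(propagation_rtt_ms(50, 8000, 30))  # ~80.8 ms before any congestion
```

And that figure is a floor: congestion and packet loss on the public Internet only add to it.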
CDNs attempt to avoid all three of these “miles” as much as possible. Since increases in distance always translate to increased latency and often greater packet loss, the solution is to place copies of the source data at Internet peering points or, in some cases, within the network of the last-mile provider. This seems like an excellent solution at face value, but two things are less commonly understood: not all content can be cached, and the cached copies themselves must be managed.
Cloud acceleration differs in that it focuses on end-to-end delivery, rather than employing caching the way CDNs do. This is very important because, again, the dynamic content of the future requires a constant connection with the end-user. Acceleration accomplishes this connection through an elaborate behind-the-scenes process.
The first step involves opening a connection to the origin server over the first mile. The data stream needs to enter the accelerated network as soon as possible so that optimization can begin. This is where the strength of the network becomes evident: the acceleration service provider should have a global footprint of origin capture nodes and, coupled with intelligent routing algorithms, these capture nodes pull content onto the private network with minimal delay.
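One plausible way such routing could select a capture node is by probing round-trip times and entering the private network at the lowest-latency point. A sketch under that assumption; the node hostnames are hypothetical:

```python
import socket
import time

def probe_rtt_ms(host, port=443, timeout=2.0):
    """Estimate RTT by timing a TCP connect (one handshake round trip)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable nodes are never selected

# Hypothetical capture-node hostnames; a real provider publishes its own.
capture_nodes = [
    "node-nyc.example.net",
    "node-lon.example.net",
    "node-sin.example.net",
]
best = min(capture_nodes, key=probe_rtt_ms)
print("pulling content onto the private network via", best)
```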
Once this is done, the content is pushed through the highly engineered private network. As mentioned before, this middle mile is the longest leg of the journey and is therefore where the majority of the optimization occurs. In addition to running a fully meshed MPLS-TE network, the provider deploys infrastructure similar to a WAN optimizer at the origin capture node, which opens a tunnel across the entire private backbone to an identical device at the edge node near the end-user. These devices optimize routing and maximize throughput with techniques such as window scaling, selective acknowledgement, round-trip measurement and congestion control.
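Window scaling and selective acknowledgement are negotiated by the TCP stack itself, but the flavor of this per-connection tuning can be shown with standard socket options. This is a generic Linux sketch, not the provider’s actual middle-mile implementation:

```python
import socket

def tuned_tunnel_socket():
    """Sketch of the per-connection tuning a middle-mile optimizer might
    apply. Window scaling and SACK are negotiated by the kernel itself
    (net.ipv4.tcp_window_scaling / net.ipv4.tcp_sack on Linux)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Large buffers let the scaled window keep a long, fat pipe full.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    # Linux exposes pluggable congestion control per socket (Python 3.6+).
    if hasattr(socket, "TCP_CONGESTION"):
        try:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
        except OSError:
            pass  # requested algorithm not loaded in this kernel
    return s
```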
The common problem of packet loss is addressed through adaptive forward error correction, which reconstitutes lost packets at the edge node and avoids the delay of multiple round-trip retransmissions. Byte-level deduplication eliminates the retransfer of identical data, and multiplexing consolidates TCP handshakes so that they terminate at the edge nodes instead of crossing the middle mile.
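The principle behind forward error correction can be illustrated with the simplest possible code, a single XOR parity packet per group. Production systems use adaptive, more powerful codes, but the recovery idea is the same:

```python
from functools import reduce

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity_packet(packets):
    """One parity packet for a group; any single loss can be rebuilt from it."""
    return reduce(xor_bytes, packets)

group = [b"pkt1", b"pkt2", b"pkt3", b"pkt4"]   # equal-length payloads
parity = parity_packet(group)
survivors = [group[0], group[1], group[3]]      # suppose pkt3 is lost en route
rebuilt = parity_packet(survivors + [parity])   # XOR of survivors and parity
assert rebuilt == group[2]                      # recovered with no retransmission
```

The edge node rebuilds the missing packet locally, so the sender never has to wait a full round trip to learn about the loss.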
The final performance-enhancing step of cloud acceleration involves taking advantage of direct peering with eyeball networks, the last mile, so that content can be dropped back onto the Internet as close to the end-user as possible. The general rule of thumb is that if the node sits within 5 to 10 ms of the end-user, the experience will still feel like a LAN and will not suffer from Internet congestion during peak usage.
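As a rough sanity check on that rule of thumb, propagation delay alone bounds how far an edge node can sit from its users. This sketch assumes straight-line fiber and ignores queuing and last-mile access delay:

```python
# At ~200 km/ms one way, each millisecond of round-trip budget buys ~100 km.
KM_PER_RTT_MS = 100.0

for budget_ms in (5, 10):
    radius_km = budget_ms * KM_PER_RTT_MS
    print(f"{budget_ms} ms RTT budget -> edge node within ~{radius_km:.0f} km")
# 5 ms -> ~500 km, 10 ms -> ~1,000 km: a handful of well-peered metro nodes
# per region can keep nearly every user inside the LAN-like window.
```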
When these steps are applied, cloud acceleration is capable of placing any type of content directly in the user’s lap. Caching hassles are eliminated, and with a continuous open data stream equivalent to that of a superhighway, it is now possible to optimize VoIP, live video, interactive e-media, FTP applications, and any other new technologies that may arise in the future. Online commerce can finally keep up with the high expectations of contemporary users. How are you delivering your content?
Jonathan Hoppe is CTO of Cloud Leverage.