Sitara's SNP protocol
The Sitara Network Protocol (SNP) adds a new level of end-to-end intelligence to IP networks, including the Internet, intranets, and extranets. This end-to-end solution allows Sitara to attack the performance problems that bedevil these networks and provide a higher quality of service to traffic between the end points of a Sitara-enhanced connection.
SNP incorporates features that track network latency and packet loss and use this information to manage the receiving system's connection with the server and the transmission rate. SNP allows the server to distinguish between network congestion and random transmission errors and selectively retransmit lost data while maintaining transmission speed to maximize data throughput.
The result for users of SNP-enhanced data communications is improved performance in IP networks. This becomes increasingly important as the Internet becomes ubiquitous within the enterprise as an alternative to private wide area networks: the great advantage of using the Internet is the reduced cost of bandwidth on the public network; the great disadvantage is the loss of control over the quality of service provided on that bandwidth.
Internet performance is beyond the reach of solutions that focus on the host site. Load balancers and caches, for example, may improve the efficiency of host servers but they do nothing to address congestion that drags down the performance of Web pages or intranet and extranet applications that must travel across the Internet.
The Sitara Network Protocol is a tool for managing performance host-site tools can't reach, in the Internet itself. SNP features such as source rate control, streamlined handshaking, and selective retransmission improve the reliability and consistency of Internet transmissions, which translates into a higher quality of service for all users.
SNP delivers its benefits while remaining completely compatible with all applications that use existing Internet protocols, such as TCP, and with existing Web site infrastructures and performance solutions such as load balancers, bandwidth managers, and others.
Background: what is a protocol?
A protocol is a set of rules to be followed. In medicine, for example, a treatment protocol outlines the steps to be taken in treating a particular disease or condition. In telecommunications, a protocol is the rules that connected devices follow in sending signals back and forth. In a computer network connection, there are multiple protocols at work at many levels.
The Internet supports scores of protocols that handle the communication of specific applications or tasks - ICMP for network status and error messages, FTP for file transfers, SMTP for e-mail, UDP for loss-tolerant traffic such as streaming media, and so on. All of these protocols make it possible for two end points to communicate by recognizing and following a common protocol - end points that may be contained within two programs running on a single PC, or in computers halfway around the world from each other.
The World Wide Web is built on HTTP (Hypertext Transfer Protocol) at the application level (the browser), and TCP (Transmission Control Protocol) and IP (Internet Protocol) at the network transport level. IP is the sole routing scheme in the Internet. This universality enables worldwide connectivity, the major ingredient of the Internet's success. IP controls the addressing and delivery of information packets. Some packets carry data; others, such as "ACK" and "SYN" packets, are much smaller and signal acknowledgments and stop/start requests.
How these packets are transported and transformed into information that is useful to a Web user is the domain of a series of functions that are collectively referred to as the Internet protocol stack, which is diagrammed here.
Starting at the top, or application layer, the job of a browser is to request Web pages from remote servers and display them. The pages arrive in the form of Hypertext Markup Language (HTML) that tells the browser how the page is laid out. A typical Web page is made up of a series of elements such as text, graphics and images, and may even include sound and video. These elements are requested individually from the remote Web server using the Hypertext Transfer Protocol (HTTP).
The HTTP requests are passed to the IP layer through a socket interface and the transport control layer (TCP). The role of TCP is to ensure that HTTP transfers over IP actually deliver the information requested: IP is all about addresses and routing between end-points on the network. TCP is all about making sure all the data that's sent from one point is received at the other.
The bottom layers, the Data Link and Physical layers, are devoted to protocols that manage the network hardware - in the case of the Internet, this means Ethernet networking hardware and the protocols that control it.
The role of TCP
The TCP protocol was developed to control transmissions on networks that carried a fundamentally different type of traffic on different hardware under conditions that were very different from the Internet of today. The Internet began as a way for U.S. academic and defense institutions to exchange large data files. The network was not complex, traffic volume was not overwhelming, and the computers were not sophisticated. The control mechanisms of TCP were developed to conduct network transfers with the maximum reliability, consistency, and speed in this environment where reliability and computer processing power were the major constraints. This is evident in several features of the protocol intended to deal with the major factors of Internet performance - network congestion and packet loss:
Handshaking. Numerous handshakes are required for each data transfer - one computer sends a request, sets a timer, and waits for the other to acknowledge. TCP uses handshaking to ensure reliability by confirming that both computers are ready to support a connection.
"Slow start" transmissions. TCP varies the amount of data it sends to the receiving computer. It does this to test the reliability of the connection, starting slowly and ramping up. The server begins by sending one packet, sending a handshake request for an acknowledgement, and setting a timer. If the transmission is successful the server receives an acknowledgement before the timer expires, and sends two packets, then four, and so on, increasing the number of packets sent in a group, or "window," until it fails to receive an acknowledgement from the client computer.
Negative acknowledgement. TCP acknowledgements are not intelligent - that is, if the client didn't receive all the packets in the window it cannot tell the server which packets it missed. It can only fail to acknowledge the transmission of the entire window. The server responds to a negative acknowledgement by retransmitting all the packets it has sent since the last acknowledgement, and cutting the window size.
"Go back n" retransmission. Because negative acknowledgement can't tell the server which packets of data weren't received, the only way TCP can recover from errors is to go back to the last acknowledged packet and resend all data packets sent since. This is called "go back n" retransmission.
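The behaviors above can be sketched as a toy model (a simplification for illustration, not TCP's actual state machine):

```python
def slow_start_windows(acks):
    """Toy model of TCP slow start: the window of packets sent per
    round doubles while acknowledgements arrive; the ramp-up ends at
    the first round whose acknowledgement never comes."""
    window, history = 1, []
    for ok in acks:
        history.append(window)
        if not ok:
            break
        window *= 2
    return history

def go_back_n_resend(last_acked, sent_up_to):
    """Go-back-n recovery: resend every packet after the last one
    acknowledged, even those the client actually received."""
    return list(range(last_acked + 1, sent_up_to + 1))

# Four good rounds, then a loss: windows of 1, 2, 4, 8, then 16 fails.
print(slow_start_windows([True, True, True, True, False]))
# If packet 5 was the last ACKed and 10 the last sent,
# go-back-n retransmits packets 6 through 10.
print(go_back_n_resend(5, 10))
```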
"Slow start," negative acknowledgement, and "go back n" retransmission allow TCP to manage network transmissions with the least possible communication between client and server, and with no exchange of information specific to the current connection characteristics or network conditions or the cause of any transmission errors. The price of this communications efficiency is the performance impact of the repeated handshakes, automatic "slow start" rate reductions, and retransmission of data that has already been successfully received.
Changing times, changing needs
TCP worked well for the early Internet, and the fact that it continues to work as well as it does is a tribute to the flexibility of its design. But conditions on the Internet have changed dramatically in ways that have not played to TCP's strengths. The Internet was engineered to serve a relatively small community of users on a relatively simple network making relatively few computer-to-computer connections to move relatively large files. Today's Internet is shaped by the World Wide Web - large numbers of users on an increasingly complex network making huge numbers of connections to move relatively small files. These changes affect TCP's efficiency in three areas:
Throughput. Whenever TCP detects a transmission error it invokes the "slow start" mechanism to reduce the transmission rate. This may be appropriate in some cases but not in others. Originally this was done because the major transmission bottleneck was buffer overflow in the receiving machine, and "slow start" gave the machine a chance to write the buffered data out to low-speed storage devices. Today a negative acknowledgement is more likely to be the result of packet loss caused by network congestion. Typically, routers can't keep up with the volume of traffic moving through them and begin to drop packets, which results in a negative acknowledgement to the server. The negative acknowledgement may also be caused by latency (that is, the packets are not dropped but are delayed in transmission long enough to cause a server timeout) or by random line errors that corrupt single packets. In these cases backing off the transmission rate reduces throughput to no good purpose.
Making and breaking connections. TCP's handshaking adds overhead to the amount of network traffic required to transmit a data object such as a Web page element or data file. When files were large this overhead was a negligible burden. But the Web has greatly increased the number of data objects transferred and decreased the typical object's size. A Web page may be composed of scores of elements - text and graphics files - each of which requires an individual connection between the client and the server. Browsers open multiple connections at the same time to speed this process along. Each connection requires a handshake to send and acknowledge a connection request and a handshake to send and acknowledge an end-of-data message. If the element is small - and many are only a packet or two of data - the percentage of traffic devoted to empty overhead can be quite high.
Error recovery. Negative acknowledgement signals an error, but carries no information about what caused the error or how much data was lost. TCP has only one possible response: it must "go back n" packets to the last packet acknowledged by the client and retransmit the whole window of packets sent since. When large blocks of sequential data were lost (as was the case when receiving buffers overflowed) "go back n" retransmission was useful. But if the loss is of random packets, as is likely in today's high-speed networks, retransmitting entire windows of data becomes part of the problem rather than part of the solution. A high percentage of the data is transmitted unnecessarily, as most of the packets were received correctly. Yet these unnecessary packets add to the burden on already congested network routers. More packets are dropped, network latency rises, and the congestion feeds on itself.
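A quick calculation makes the handshaking burden concrete. The four-control-packets-per-connection figure below is an illustrative assumption, not a measured value:

```python
def overhead_fraction(data_packets, handshake_packets=4):
    """Fraction of a transfer's packets that are pure handshake
    overhead. Assumes (hypothetically) four control packets per
    connection: open request/ACK plus end-of-data request/ACK."""
    total = data_packets + handshake_packets
    return handshake_packets / total

# A one-packet page element: 4 of 5 packets are overhead (80%).
print(round(overhead_fraction(1), 2))
# A 1000-packet file: the same handshakes are negligible (~0.4%).
print(round(overhead_fraction(1000), 3))
```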
The changes made by the Web have reshaped the Internet: What was once a network with consistently low delay and roughly the same bandwidth available throughout has become a twisted fabric of network segments of varying bandwidth stitched together by routers into a congested, high-latency, high-loss network. Taming this congestion to deliver a high quality of service to users of Web content requires fundamental improvements in the structures that manage the network - in this case, the TCP protocol.
What is SNP?
Sitara has developed an end-to-end solution that addresses the components of the Internet performance problem. Sitara's solution is a client-server configuration that puts intelligence at both ends of the Internet connection to optimize data transfers even when Internet performance is degraded.
The Sitara SpeedServer is installed at the Web site, adjacent to the Web servers. A SpeedServer can enhance multiple Web servers at a site, depending upon bandwidth capacity requirements. The site infrastructure is untouched, and no configuration changes are made to the Web servers or software.
The Sitara SpeedSeeker™ client is installed on the user's PC. SpeedSeeker runs on Windows 95/98 and Windows NT operating systems and does not require any changes to existing browser applications.
The Sitara solution spans the Internet from the SpeedServer on the Web site to the SpeedSeeker client on the desktop.
The SpeedSeeker and SpeedServer communicate using an extended version of the TCP protocol called Sitara Network Protocol (SNP). SNP is a full-fledged networking protocol (it is designated Protocol 109 by the Internet Assigned Numbers Authority) and Sitara has made patent filings on several aspects of its technology.
The Sitara solution is implemented by improving and extending the functions of Internet transport control. SNP shares the transport layer with TCP, as shown here. The arrangement is transparent to applications such as e-mail, Telnet, and others that use the Internet protocol stack. If the Web user's PC is running SpeedSeeker and the Web site is enhanced with SpeedServer, then both the user and the site will enjoy the benefits of Sitara's increased reliability and improved performance. If either side is not Sitara-enabled, then communications will default to TCP without the non-Sitara-enabled side even being aware of the difference.
Sitara-enabled Internet protocol stack
While SNP functions as a transport protocol just like TCP, it goes considerably further than TCP in communicating data about the connection. This allows SNP to change its behavior to suit current conditions of network latency, packet loss, and congestion.
The richness of SNP's connection data also makes it possible for the Sitara server and client to support other performance-enhancing features - Sitara's text compression, for example - and to independently gather and analyze data about the connection, and use a wide variety of options to recover from errors while maintaining throughput.
The Sitara solution uses the processing power of both the client and the server to optimize the connection in three critical areas:
1. How the SpeedSeeker connects to the SpeedServer
Before a SpeedSeeker and SpeedServer can communicate to enhance Web transfers they must become aware of each other's existence. The process builds on the same events and resources used by TCP, but takes advantage of the greater processing power of today's desktop PCs to accelerate the connection process:
When a Web browser requests a page, the request is inspected by SpeedSeeker, which takes a sequence of actions - first local, then network-based - to determine if the Web server that will return the requested page is enhanced by a SpeedServer. The key to these actions is the SpeedDirectory, a directory of Sitara-enhanced Web sites maintained by SpeedSeeker on the client PC.
SpeedSeeker checks this directory for the Web server named in the request. If it is found, the entry may be either positive or negative: A negative entry indicates that the named server is not Sitara-enhanced. SpeedSeeker passes the request to TCP and the page retrieval continues as a plain Internet process. A positive entry includes the IP address of the SpeedServer that enhances the named Web server. SpeedSeeker sends a connection request to this IP address and establishes an enhanced connection with the SpeedServer.
If the named Web server isn't found in the SpeedDirectory the SpeedSeeker will send a query to a Domain Name Server (DNS), just as TCP would. But where TCP would request the IP address entered in the named server's DNS record, SNP asks for the DNS TXT resource record associated with the server.
DNS records typically link a server name, such as "www.sitara.net," to a numerical IP address like 18.104.22.168. TXT records are optional additions to the DNS entry. When a SpeedServer is installed on a Web site TXT records carrying the SpeedServer's IP address are added to the DNS entries for the Web servers enhanced by that SpeedServer.
If SpeedSeeker's request for a TXT record yields a SpeedServer's IP address, a connection request is sent to that SpeedServer. If the record is not available or doesn't include a SpeedServer address, SpeedSeeker passes the page request on for plain TCP/IP retrieval from the Web server.
Whether a SpeedServer is identified or not, the SpeedSeeker updates the SpeedDirectory, so that subsequent requests for elements within the page, or other pages on the same Web server, will be handled as quickly as possible. (The SpeedDirectory may also be updated manually by the user, who may enter a server name or IP address and request a resolution of its status.)
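The cache-then-DNS resolution flow described above can be sketched as follows. The function names, the dictionary cache, and the `dns_txt_lookup` stub are hypothetical illustrations, not Sitara's actual API:

```python
def resolve(host, directory, dns_txt_lookup):
    """Return a SpeedServer IP for an enhanced host, or None to
    fall back to plain TCP/IP. The directory caches both positive
    (IP address) and negative (None) entries, as described above."""
    if host in directory:           # local SpeedDirectory first
        return directory[host]
    ip = dns_txt_lookup(host)       # network: DNS TXT record query
    directory[host] = ip            # cache the answer either way
    return ip

# Simulated DNS: only one site publishes a SpeedServer TXT record.
records = {"www.sitara.net": "10.0.0.5"}
cache = {}
print(resolve("www.sitara.net", cache, records.get))  # enhanced
print(resolve("example.org", cache, records.get))     # plain TCP/IP
print(cache)  # both outcomes are now cached locally
```

Caching negative entries matters as much as caching positive ones: it keeps every later element request for a non-enhanced site from paying the DNS round trip again.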
2. Streamlined handshaking
Once a SpeedSeeker client connects to a SpeedServer, the transfer of data begins quickly. SNP streamlines the handshaking required to establish a connection. TCP requires two round-trips across the network before data reaches the client:
SNP cuts this network traffic in half by sending both a connection request and a retrieval request in the same packet. This means that for each connection it requires just one round-trip across the network to begin the flow of data to the client:
The SNP connection packet also contains information about the client's maximum window size and the speed of its connection to the Internet.
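A rough picture of that combined opening packet, with hypothetical field names (the document does not specify SNP's wire format):

```python
from dataclasses import dataclass

@dataclass
class ConnectPacket:
    """Hypothetical layout of SNP's combined opening packet: the
    connection request and the first retrieval request travel
    together, so data can start flowing after one round trip
    instead of TCP's two."""
    url: str             # the retrieval request itself
    max_window: int      # client's maximum window size
    link_speed_bps: int  # client's reported connection speed

# One packet carries everything the server needs to start sending.
pkt = ConnectPacket("http://www.sitara.net/index.html", 32, 56_000)
print(pkt.max_window, pkt.link_speed_bps)
```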
3. Connection management
SNP monitors the connection between the client and server, and maintains data about previous connections between the client and the server within the current session, including a figure for the latency of the connection, or Round Trip Time (RTT). SNP uses this RTT figure as the default time-out when it sends a new connection request. This can result in much closer management of connections than is possible with the arbitrary timer settings of TCP.
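The document does not spell out how SNP smooths its RTT figure; a common approach (the one TCP's own estimator uses) is an exponentially weighted moving average, sketched here as an assumed stand-in:

```python
def smoothed_rtt(samples, alpha=0.125):
    """EWMA round-trip-time estimate: each new sample nudges the
    running estimate by a fraction alpha. This is the classic
    TCP-style smoother, assumed here as a stand-in for SNP's
    undescribed algorithm."""
    est = samples[0]
    for s in samples[1:]:
        est = (1 - alpha) * est + alpha * s
    return est

# Use a multiple of the current estimate as the timeout for the
# next connection request, instead of an arbitrary fixed timer.
rtts_ms = [100.0, 110.0, 90.0, 105.0]
timeout_ms = 2 * smoothed_rtt(rtts_ms)
print(round(timeout_ms, 1))
```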
SNP applies similar efficiencies to closing a connection. TCP has only one close procedure: the client sends a connection close request, and the server acknowledges. If the request packet is lost, the connection must time out. If the application instructs TCP to close the connection, it must wait for this process to complete.
SNP has two close procedures. The first is the normal close procedure. The SpeedSeeker shuts down the connection in a methodical manner by issuing a close request to the server after it has determined that all the data has been received reliably. The second procedure is the abortive close procedure. The application issues a close request to the protocol and the connection to the server is terminated without waiting for all the data to be received.
SNP separates transmission rate management from error control. In TCP these two are tied. When TCP detects an error it reduces the window size - the number of data packets - it sends to the client before waiting for acknowledgement and it performs a "go back n" error recovery. SNP can distinguish between errors caused by network congestion and those that result from the random corruption of a packet. Based on this knowledge it can recover from errors while maintaining the maximum transmission of data through the connection.
1. Detecting congestion
TCP's "slow start" reduction of the window size is a sort of brute-force approach to controlling congestion in the network: if the error was caused by packets being dropped by a router, then the reduced window size slows down the flow of data through the link and gives it a chance to recover.
But the error may have been an impulse loss, the result of a transient event anywhere in the network. RF carrier transmission noise in the physical network is a typical cause of impulse errors - single-bit errors that cause packet checksums to fail and the packets to be discarded by the receiving system.
TCP's window-based congestion control has no way of distinguishing between losses caused by network congestion and impulse losses. And reducing the window size, which is the only response TCP can make, has no positive effect on error recovery, just a negative impact on connection throughput.
SNP's rate-based congestion control differentiates between congestion packet loss and impulse packet loss. It does this by continuously performing calculations that determine the type, frequency and magnitude of errors on the connection.
When the connection is established, the client reads the DTE speed of its hardware link to the network and sends that figure to the server. The server sends packets at that rate, and periodically sends a request for response. The client responds with a span list - a listing of all the packets it has received successfully and, by implication, those it has not.
If there are data losses, the SpeedServer takes two actions:
It determines exactly which packets to re-send.
It determines the optimal speed at which to send the packets by analyzing the SpeedSeeker data. This analysis determines whether the loss was due to network congestion (indicated by the clustering and patterns of lost packets) or to impulse losses.
If congestion is the cause of the loss, the SpeedServer reduces the transmission rate at which it sends the recovery packets and resumes sending new packets. If congestion is not the problem, the SpeedServer maintains the transmission rate while it performs the steps of error recovery. This more flexible approach avoids the slowdowns and inefficiencies of the TCP "slow start" algorithm.
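One way to tell the two loss types apart from a span list is to look for clustering of the lost sequence numbers. The heuristic and thresholds below are illustrative assumptions, not Sitara's published algorithm:

```python
def classify_loss(lost_seqs, cluster_gap=2, burst_len=3):
    """Classify a sorted list of lost packet sequence numbers.
    Runs of nearby losses suggest a congested router dropping
    bursts; isolated single losses suggest random impulse errors.
    The gap and burst thresholds are assumed for illustration."""
    if not lost_seqs:
        return "none"
    runs, run = [], 1
    for a, b in zip(lost_seqs, lost_seqs[1:]):
        if b - a <= cluster_gap:
            run += 1            # loss close to the previous one
        else:
            runs.append(run)    # gap: close out the current run
            run = 1
    runs.append(run)
    return "congestion" if max(runs) >= burst_len else "impulse"

print(classify_loss([14, 15, 16, 17]))  # burst of drops -> congestion
print(classify_loss([8, 40, 95]))       # scattered singles -> impulse
```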
2. Protocol fairness
Extensive testing has shown that SNP interacts fairly with and does not take priority over other protocols such as TCP in highly congested networks. SNP's fairness is the result of the way its rate-control mechanism manages multiple connections between the same client and server. The server identifies the client by IP address, and knows its receiving capacity. The server never sends more data than the client can handle. If multiple connections are opened between the server and a single IP address (as there usually are for Web pages) the server "round-robins" among the connections, sending some data from each in succession, never exceeding the client's capacity.
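The round-robin scheme above might be sketched like this; the per-cycle `capacity` cap and function shape are illustrative assumptions:

```python
from collections import deque

def round_robin(connections, capacity):
    """Interleave packets from several connections to one client,
    sending at most `capacity` packets in this cycle so the
    client's receiving capacity is never exceeded. A sketch of
    the fairness scheme described above, not Sitara's code."""
    queues = [deque(c) for c in connections]
    sent = []
    while any(queues) and len(sent) < capacity:
        for q in queues:
            if q and len(sent) < capacity:
                sent.append(q.popleft())  # one packet per connection per pass
    return sent

# Three page-element streams; the client can take 7 packets this cycle.
print(round_robin([["a1", "a2", "a3"], ["b1", "b2"], ["c1", "c2", "c3"]], 7))
```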
3. Selective retransmission
SNP uses a "selective retransmission" mechanism that greatly improves the efficiency of recovering from data-loss errors.
In TCP's "go back n" error-recovery algorithm, the server has no knowledge of which data was lost. It knows only which data was correctly received and acknowledged by the client. So when an error is signaled by a negative acknowledgement (NAK) from the client, the server determines what data was included in the last acknowledged (ACK) window, and re-sends all of the data sent since - even if subsequent packets were correctly received. This can obviously be very inefficient, as the diagram below indicates:
In SNP, the server and the client both maintain a record of all packets transmitted and received. When the span list sent by the client indicates that an individual packet was not received the server retransmits it:
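Given a span list, computing the minimal retransmission set is straightforward. The `(first, last)` range representation below is an assumed encoding; the document does not describe SNP's actual wire format:

```python
def missing_from_spans(spans, highest_sent):
    """Given a span list of (first, last) inclusive ranges the
    client reports as received, return only the packets the server
    must retransmit - a sketch of selective retransmission, in
    contrast to go-back-n's resend-everything approach."""
    received = set()
    for first, last in spans:
        received.update(range(first, last + 1))
    return [seq for seq in range(1, highest_sent + 1)
            if seq not in received]

# Client got packets 1-9 and 12-20 of 20: only 10 and 11 go again,
# where go-back-n would have resent everything from 10 through 20.
print(missing_from_spans([(1, 9), (12, 20)], 20))
```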
The Sitara Network Protocol brings a higher level of consistency, reliability, and speed to Internet transfers by bringing end-to-end intelligence to network transactions. Of all the connection-oriented protocols, SNP is uniquely aware of the receiving and transmitting end-points of the transaction, of their capabilities and status, and of conditions in the Internet that affect their connection.
SNP's end-to-end intelligence rests on three factors:
Its ability to collect information on the capabilities of the end-points and the performance of data transfers. The SpeedServer knows the client's receive rate, for example. And SNP communicates span loss data and round-trip times. This sets SNP apart from protocols that are completely unaware of what's at the other end of the wire, and what's happening in between.
Its ability to infer network conditions from computational analysis of that information. By characterizing the distribution and frequency of packet losses SNP can distinguish between network congestion and random impulse losses.
Its ability to employ a variety of responses. Even if TCP could discriminate between impulse losses and router congestion, it has only one response it can make - cut the window size and go back to the last acknowledgement.
The combination of these three gives SNP the ability to manage for optimal transfer rate, to manage congestion separately from rate, and to recover from errors with unequalled efficiency. The goal of other protocols is to make connections. Its end-to-end intelligence lets SNP function at a higher level to make connections the best they can be.
The Sitara Solution
©2000 Sitara Networks, Inc. All rights reserved. email@example.com