
Top-5 TCP throughput optimization methods

Imagine that you want to transfer a large amount of data efficiently and reliably. You will need a reliable transport protocol, and chances are you will rely on TCP, the predominant layer 4 protocol of the Internet. The problem with TCP (it's not a bug, it's a feature) is that it does not use all the available bandwidth. So, what is the issue, and how can we solve it?

TCP 3-way handshake (credit: wikipedia)

Top-5 TCP throughput optimization methods

How can we use TCP and still send data at maximum throughput?

The easiest way to improve TCP throughput is to open several TCP connections in parallel. That way, packet losses will have much less impact on the global throughput. Let's consider a simple example to explain why several TCP connections use the available bandwidth more efficiently than a single one.
  • With a single TCP connection: if the link rate is 100 Mb/s, halving the throughput after a packet loss brings us down to 100/2 = 50 Mb/s.
  • With three TCP connections: if the link rate is 100 Mb/s and each connection uses about 33 Mb/s, halving the throughput of one connection after a packet loss brings us down to 33/2 + 33 + 33 ≈ 82 Mb/s. That's better! (A small calculation sketched right after this list makes the comparison easy to replay.)
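Here is a minimal back-of-the-envelope sketch of that comparison in Python. The link rate, the number of connections and the assumption that exactly one connection is halved by a loss come from the example above; the function name and the exact rounding are just illustrative.

```python
# Back-of-the-envelope: aggregate throughput when exactly one of n equal
# connections halves its rate after a packet loss.
LINK_RATE_MBPS = 100.0  # link rate from the example above

def throughput_after_one_loss(n_connections: int) -> float:
    share = LINK_RATE_MBPS / n_connections        # fair share per connection
    return share / 2 + share * (n_connections - 1)

print(throughput_after_one_loss(1))  # 50.0 Mb/s
print(throughput_after_one_loss(3))  # ~83.3 Mb/s (the text rounds each share to 33 Mb/s, hence ~82 Mb/s)
```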
There are also more sophisticated ways than opening several connections to optimize TCP performance: these other methods are referred to as TCP tuning. They involve the following ideas (a short per-socket sketch of a couple of them follows below).
  • Start sending data during the three-way handshake
  • Increase the size of the initial congestion window
  • Expand the congestion window more rapidly in the slow-start and congestion-avoidance phases
  • Decrease the back-off factor applied when a packet loss is detected
  • Pick a TCP flavour that is more efficient
  • Use hardware optimizations (e.g., TCP Segmentation Offload (TSO))
  • ...
It's not just theory: most content delivery networks (CDNs) use these kinds of optimizations to improve the speed of content downloads over the Internet.
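As an illustration, here is a minimal sketch of how a couple of the knobs listed above can be requested from application code, assuming Linux and Python 3.6+; the TCP_CONGESTION and TCP_FASTOPEN socket options are Linux-specific, and "bbr" is just an example congestion control algorithm that may not be available on your kernel. The initial congestion window, by contrast, is usually tuned per route (for instance with "ip route ... initcwnd") rather than per socket.

```python
import socket

# Sketch only: request a specific congestion control algorithm and enable
# TCP Fast Open on a listening socket (Linux-only options).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# "Pick a TCP flavour": ask the kernel for a given congestion control
# algorithm; this fails if the module (e.g. bbr) is not loaded.
if hasattr(socket, "TCP_CONGESTION"):
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    except OSError:
        pass  # algorithm not available, keep the system default

# "Start sending data during the handshake": TCP Fast Open on the server
# side (the value is the length of the pending Fast Open queue).
if hasattr(socket, "TCP_FASTOPEN"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)

sock.bind(("0.0.0.0", 8080))
sock.listen()
```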

Why TCP's congestion avoidance algorithm limits the throughput

TCP is designed to maintain some fairness among the users of a shared infrastructure: the Internet. This means that when there are not enough resources to carry everyone's traffic (the pipes are too small), for instance during peak hours, TCP limits the speed of everyone's communication so that each user gets their share of the bandwidth.

How does it work? TCP is complicated, so I'll explain the general principles at a high level. TCP maintains a "congestion window" whose size describes the amount of data a sender is allowed to have in transit in the network at a given moment. For TCP, the "data in transit" is the packets that have been sent but for which no acknowledgement has been received yet. The window size increases progressively at the beginning of the TCP connection (a phase named "slow start"), which means that TCP sends data faster and faster. Then, when a packet loss (or a timeout) occurs, TCP divides the window size by a factor, for instance 2, and the throughput abruptly drops. This algorithm is named additive-increase, multiplicative-decrease (AIMD); a toy simulation of it is sketched below.
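To make the resulting saw-tooth tangible, here is a toy sketch of the window evolution. It is not real TCP Reno (there is no fast retransmit, no timeout handling, and the loss probability, threshold and round count are made-up values); it only shows the additive-increase/multiplicative-decrease skeleton described above.

```python
import random

# Toy AIMD simulation: one value of the congestion window per round trip.
cwnd = 1.0        # congestion window, in segments
ssthresh = 32.0   # slow-start threshold (arbitrary)
LOSS_PROB = 0.01  # assumed probability of a loss per round trip

random.seed(0)
for rtt in range(200):
    if random.random() < LOSS_PROB:
        ssthresh = max(cwnd / 2, 1.0)   # multiplicative decrease on loss
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth
    else:
        cwnd += 1.0                     # congestion avoidance: additive increase
    print(rtt, round(cwnd, 1))          # plot this to see the saw-tooth
```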

To illustrate these principles, I have borrowed the pictures below from David X. Wei and Prof. Pei Cao, who simulated various flavours of the TCP protocol in the network simulator NS-2 on this very interesting webpage.

On the figure below, you can see the evolution over time of the congestion window in the TCP Reno implementation. The x axis is the time in seconds, and the y axis is the congestion window size.

The additive-increase phase is clearly visible on the left of the figure (where the congestion window grows), and so is the multiplicative-decrease phase (the vertical drops where the congestion window is abruptly cut).

"Throughput" measures the amount of data that arrives to the destination per unit of time. TCP impact on the throughput is visible on the figure below. In this figure, the x axis is the time in seconds and the y axis is the throughput. You can see that in an empty network, the available bandwidth is not completely used during the first 80 seconds (that's long). Then, at 100 second, TCP detects the first packet loss and TCP algorithm divides the throughput by 2.
What you should remind is that: TCP throughput has a saw-like pattern and TCP does not use all available bandwidth.
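A rough way to quantify that last point, assuming the rate climbs back roughly linearly from half the peak to the peak between two losses: the time average of such a ramp is about 3/4 of the peak rate. The tiny check below just averages a linear ramp with arbitrary numbers.

```python
# Quick numeric check of the "about 3/4 of the peak" intuition for a
# saw-tooth that climbs linearly from W/2 to W between two loss events.
W = 100.0  # arbitrary peak rate just before a loss
ramp = [W / 2 + (W / 2) * t / 1000 for t in range(1001)]
print(sum(ramp) / (len(ramp) * W))  # ~0.75
```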

