[Bloat] tcp loss probes in linux 3.10
Jonathan Morton
chromatix99 at gmail.com
Thu May 9 19:43:26 EDT 2013
On 10 May, 2013, at 2:01 am, Dave Taht wrote:
> I have to admit that the 96% figure strongly suggests some degree of bufferbloat in the tested mix here. I am curious however as to what other causes there might be, ranging from tcp bugs to glitches in the matrix?
For the specific case of YouTube servers, the fact that the video stream is "bottled" - so that only a few seconds of buffer-ahead are available to the client - probably plays a role. Sections of the file are released in bursts, filling any buffers en route, and the end of a burst then stands a good chance of being consumed by a tail-drop loss episode. If the time between release bursts exceeds the RTO, then the expiry of the RTO timer will (by default) be the only signal the server gets about the loss event.
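To make that failure mode concrete, here is a toy Python sketch (not Linux code; the burst size, the number of tail-dropped segments and the 3-dupACK threshold are just illustrative assumptions). Nothing is delivered after the lost tail of a burst, so the receiver never generates duplicate ACKs, fast retransmit cannot trigger, and the sender is left waiting for its RTO timer - which is exactly the case a tail loss probe is meant to catch earlier.

  # Toy model of a tail-drop at the end of a release burst.
  # Not Linux code; segment and drop counts are illustrative assumptions.

  def acks_for(burst, tail_dropped):
      """Cumulative ACKs the receiver sends when the last segments are lost."""
      delivered = burst[:len(burst) - tail_dropped]  # the tail never arrives
      return [seg + 1 for seg in delivered]          # in-order data: ACK just advances

  burst = list(range(10))                   # segments 0..9 released in one burst
  acks = acks_for(burst, tail_dropped=3)    # last three segments tail-dropped

  dup_acks = sum(1 for a, b in zip(acks, acks[1:]) if a == b)
  print("ACKs returned:", acks)             # 1..7, strictly advancing
  print("duplicate ACKs:", dup_acks)        # 0 - nothing follows the hole
  print("fast retransmit?", dup_acks >= 3)  # False: recovery waits for the RTO

Had the drop hit the middle of the burst instead, the later segments would still arrive and generate the duplicate ACKs needed for fast retransmit.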
The bursty release is worse for TCP than free streaming would be, because a free-running flow is largely self-clocked by the steady return of ACKs, with the congestion window remaining full most of the time - so only the bottleneck queue fills up and overflows. When a burst is released, however, other intermediate queues can also fill up and overflow, resulting in a larger number of lost packets - and yet a larger congestion window might result.
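The same point shows up in a toy single-queue model (the queue limit, drain rate and packet counts below are invented for illustration, not measured from any real path): packets that pass cleanly when they arrive at the ACK-clocked rate overflow an intermediate buffer when the same total is released as one burst.

  # Toy intermediate-queue model; all parameters are illustrative assumptions.
  QUEUE_LIMIT = 20    # packets the intermediate buffer can hold
  DRAIN_RATE = 2      # packets the queue forwards per tick

  def packets_lost(arrivals):
      backlog, lost = 0, 0
      for pkts in arrivals:
          backlog += pkts
          if backlog > QUEUE_LIMIT:          # tail-drop the overflow
              lost += backlog - QUEUE_LIMIT
              backlog = QUEUE_LIMIT
          backlog = max(0, backlog - DRAIN_RATE)
      return lost

  total, ticks = 60, 30
  paced = [total // ticks] * ticks           # ACK-clocked: 2 packets per tick
  burst = [total] + [0] * (ticks - 1)        # the same 60 packets in one burst

  print("lost when ACK-clocked:", packets_lost(paced))  # 0
  print("lost when bursted    :", packets_lost(burst))  # 40, one big loss episode

The steady flow never exceeds the drain rate, so only a genuine bottleneck could make it overflow; the burst overloads whatever buffer it meets first.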
So it is still bufferbloat, but there can be strange and unintended interactions with some systems.
- Jonathan Morton