[Bloat] DSLReports Speed Test has latency measurement built-in
Jonathan Morton
chromatix99 at gmail.com
Tue Apr 21 05:37:26 EDT 2015
I would explain it a bit differently to David. There are a lot of
interrelated components and concepts in TCP, and it's sometimes hard to see
which ones are relevant in a given situation.
The key insight, though, is that there are two windows: the receive
window, maintained by the receiver, and the congestion window,
maintained by the sender. Data can only be sent if it fits into BOTH
windows. The receive window is effectively set by that sysctl, and the
congestion window is the one that changes dynamically.
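To make that concrete, here is a minimal sketch (my illustration, not
code from any real stack; the names are made up) of the rule that data
must fit into both windows:

    # A TCP sender may only have in flight the minimum of the two
    # windows; anything beyond that must wait.
    def sendable_bytes(cwnd, rwnd, bytes_in_flight):
        window = min(cwnd, rwnd)             # both limits must hold
        return max(0, window - bytes_in_flight)

    # e.g. cwnd of 30000, rwnd of 65535, 20000 bytes already unacked:
    print(sendable_bytes(30000, 65535, 20000))   # -> 10000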
The correct size for both windows is the bandwidth-delay product (BDP)
of the path between the two hosts. However, this size varies from path
to path and over time, so you can't set a single size which works in
all, or even most, situations. The general approach with the best
chance of working is to set the receive window large and rely on the
congestion window to adapt.
Incidentally, 200ms at say 2Mbps gives a BDP of about 50KB.
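As a quick check on that arithmetic (assuming decimal units, i.e.
1 Mbps = 10^6 bits/s):

    # BDP = bandwidth * round-trip time, converted to bytes
    bandwidth_bps = 2_000_000    # 2 Mbps
    rtt_s = 0.200                # 200 ms
    bdp_bytes = bandwidth_bps * rtt_s / 8
    print(bdp_bytes)             # -> 50000.0, i.e. about 50 KB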
The problem with that is that in most networks today, there is
insufficient information for the congestion window to find its ideal
size. It will grow until it receives an unambiguous congestion signal,
typically a lost packet or an ECN mark. But that signal will most
likely occur on queue overflow at the bottleneck, and due to the
resulting induced delay, the sender will have been overfilling that
queue for a while before the signal to back off arrives - so probably
a whole bunch of packets got lost in the meantime. Then, after
retransmitting the lost packets, the sender has to wait for the
receiver to catch up with the smaller congestion window before it can
resume.
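A toy simulation (my own illustration of Reno-style additive increase
and multiplicative decrease, not the exact behaviour of any real
stack) shows the sawtooth this produces:

    # Toy AIMD loop: the congestion window grows by one segment per
    # RTT until the path plus bottleneck queue is overrun, then the
    # loss signal halves it, and the climb starts again.
    capacity = 48      # segments the path + bottleneck queue can hold
    cwnd = 10          # initial congestion window, in segments

    for rtt in range(100):
        if cwnd > capacity:
            cwnd //= 2          # multiplicative decrease on loss
        else:
            cwnd += 1           # additive increase, per RTT
        print(rtt, cwnd)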
Meanwhile, the receiver can't deliver any of the data it's receiving
because the lost packets belong in front of it. If you've ever noticed a
transfer that seems to stall and then suddenly catch up, that's due to a
lost packet and retransmission. The effect is known as "head of line
blocking", and can be used to detect packet loss at the application layer.
Ironically, most hardware designers will tell you that buffers are
meant to smooth data delivery. That's true, but only while the buffer
doesn't overflow - and TCP will always overflow a dumb queue if
allowed to.
Reducing the receive window to a value below the path's native BDP
plus the bottleneck queue length can be used as a crude way to prevent
the bottleneck queue from overflowing. The congestion window then
grows to the receive window size and stays there, and TCP enters a
steady state where every ack results in the next packet(s) being sent.
(Most receivers won't send an ack for every received packet, as long
as none are missing.)
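If you want to experiment with that, the receive window can be clamped
per socket; a minimal sketch using SO_RCVBUF (the exact mapping from
buffer size to advertised window is stack-dependent, so the number is
illustrative):

    # Sketch: clamp the receive buffer, and hence the advertised
    # receive window, before connecting. 64 KB is an arbitrary pick.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
    sock.connect(("example.com", 80))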
However, running multiple flows in parallel with a receive window
tuned for one flow multiplies the data in flight (two flows double
it), and the queue may once again overflow. If you look only at
aggregate throughput, you might not notice this, because parallel TCPs
tend to fill in each other's gaps. But the individual flow throughputs
will show the same "head of line blocking" effect.
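A quick worked example with the same assumed numbers as above (the
queue size is made up):

    # rwnd tuned so that ONE flow just fills path + queue.
    path_bdp = 50_000          # bytes (2 Mbps * 200 ms)
    queue = 20_000             # bytes of bottleneck buffer (assumed)
    rwnd = path_bdp + queue

    for flows in (1, 2, 4):
        in_flight = flows * rwnd
        excess = max(0, in_flight - (path_bdp + queue))
        print(flows, "flows:", in_flight, "offered,", excess, "excess")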
- Jonathan Morton