[Bloat] TCP congestion detection - random thoughts
Alan Jenkins
alan.christopher.jenkins at gmail.com
Sun Jun 21 13:05:52 EDT 2015
Hi Ben
Some possible Sunday reading relating to these thoughts :).
https://lwn.net/Articles/645115/ "Delay-gradient congestion control"
[2015, Linux partial implementation]
and our Dave's reply to a comment:
https://lwn.net/Articles/647322/
Quote "there is a huge bias (based on the experimental evidence) that
classic delay based tcps lost out to loss based in an undeployable fashion"
- not to argue the quote either way, but there are some implicit
references there that are relevant. Primarily a well-documented result
on TCP Vegas. AIUI Vegas uses increased delay as well as loss/marks as
a congestion signal. As a result, it gets a lower share of the
bottleneck bandwidth when competing with other TCPs. Secondly, uTP has
a latency (increase) target (of 100ms :p), _deliberately_, to
de-prioritize itself. (This is called LEDBAT and has also been
implemented as a TCP.)
Alan
On 21/06/15 17:19, Benjamin Cronce wrote:
> Just a random Sunday morning thought that has probably been thought of
> before, but I can't recall hearing it before.
>
> My understanding of most TCP congestion control algorithms is that they
> primarily watch for drops, but drops are indicated by the receiving
> party via ACKs. The issue with this is that TCP keeps pushing more data
> into the window until a drop is signaled, even if the rate received does
> not increase. What if the sending TCP also monitored the rate received
> and backed off cramming more segments into the window if the received
> rate does not increase?
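>
> A rough sketch of the rate monitoring I have in mind, done entirely at
> the sender from cumulative ACKs (Python purely for illustration; the
> names are made up):
>
> import time
>
> class AckRateEstimator:
>     """Estimate the rate at which bytes are being ACKed (sender side).
>
>     Illustrative only: a real stack would use per-connection kernel
>     timestamps rather than wall-clock sampling like this.
>     """
>     def __init__(self):
>         self.last_sample = None   # (timestamp, cumulative bytes ACKed)
>
>     def on_ack(self, cumulative_bytes_acked):
>         now = time.monotonic()
>         rate = None
>         if self.last_sample is not None:
>             t0, acked0 = self.last_sample
>             if now > t0:
>                 rate = (cumulative_bytes_acked - acked0) / (now - t0)
>         self.last_sample = (now, cumulative_bytes_acked)
>         return rate   # bytes per second, or None until two samples exist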
>
> Two things could measure this: the RTT, which is already part of TCP's
> statistics, and the rate at which bytes are ACKed. If you double the
> number of segments being sent but, over a time frame relative to the
> RTT, do not see a meaningful increase in the rate at which bytes are
> being ACKed, you may want to back off.
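>
> In rough Python terms, the back-off decision might look something like
> this (the growth threshold is a pure guess on my part):
>
> def probe_decision(cwnd, rate_before, rate_after, min_growth=1.25):
>     """Decide what to do after pushing extra segments for about one RTT.
>
>     rate_before / rate_after are ACKed-bytes-per-second measured over
>     RTT-sized windows before and after the increase; min_growth is the
>     "meaningful increase" threshold.
>     """
>     if rate_after >= rate_before * min_growth:
>         return cwnd              # the extra segments bought real throughput
>     return max(cwnd // 2, 2)     # they only built queue: back off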
>
> It just seems to me that if you have a 50ms RTT and 10 seconds of
> bufferbloat, TCP is cramming data down the path with no care in the
> world about how quickly data is actually getting ACKed; it's just
> waiting for the first segment to get dropped, which would never happen
> in an infinitely buffered network.
>
> TCP should be able to keep state that tracks the minimum RTT and
> maximum ACK rate. Between these two, it should not be able to go over
> the max path rate except when attempting to probe for a new max or
> min. Min RTT is probably a good target because path latency should be
> relatively static; path free bandwidth, however, is not. The desirable
> number of segments in flight would need to change but would be bounded
> by the max.
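>
> Concretely, those two pieces of state would bound the data in flight
> roughly like this (sketch only; "gain" is just a fudge factor for the
> probing case):
>
> class PathState:
>     """Track min RTT and max ACK rate; cap the data in flight."""
>
>     def __init__(self):
>         self.min_rtt = float("inf")   # seconds
>         self.max_ack_rate = 0.0       # bytes per second
>
>     def update(self, rtt_sample, ack_rate_sample):
>         self.min_rtt = min(self.min_rtt, rtt_sample)
>         self.max_ack_rate = max(self.max_ack_rate, ack_rate_sample)
>
>     def inflight_cap(self, gain=1.0):
>         # max ACK rate * min RTT estimates the path's bandwidth-delay
>         # product; gain > 1.0 only while probing for a new max or min.
>         return gain * self.max_ack_rate * self.min_rtt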
>
> Of course, Nagle-type algorithms can mess with this, because when an
> ACK occurs is no longer based entirely on when a segment is received,
> but also on some additional amount of time. If you assume that N
> segments will be coalesced into a single ACK, then you need to add to
> the RTT the amount of time, at the current packets per second, until
> you expect the next ACK, assuming N segments are coalesced. This would
> be even more important for low-latency, low-bandwidth paths. Coalescing
> information could be assumed, negotiated, or inferred. Negotiated would
> be best.
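>
> As a formula, the correction I am picturing is roughly this (N and the
> current packet rate being whatever was assumed, negotiated, or
> inferred):
>
> def expected_ack_interval(rtt, coalesce_n, segments_per_second):
>     """Time budget to wait for an ACK when the receiver coalesces ACKs.
>
>     rtt                 -- measured round-trip time, in seconds
>     coalesce_n          -- assumed number of segments covered by one ACK
>     segments_per_second -- current sending rate in packets per second
>
>     At low packet rates the coalescing term dominates, which is why
>     this matters even on low-latency, low-bandwidth paths.
>     """
>     return rtt + coalesce_n / segments_per_second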
>
> Anyway, just some random Sunday thoughts.