[Bloat] TCP congestion detection - random thoughts

Benjamin Cronce bcronce at gmail.com
Sun Jun 21 15:34:03 EDT 2015


I'll have to find some time to look at those links.

I guess I wasn't thinking of using latency to determine the rate, only to
nudge it: use latency as a signal to back off briefly so the buffer can
drain, while maintaining roughly the same rate overall. So if you have a
60ms RTT but a 50ms minimum RTT, keep your current rate but periodically
skip one segment, or send a smaller segment (say, half the maximum size),
but not often enough to make a large difference. Maybe cap it so latency
can reduce the current target rate by no more than 5%.
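Roughly, as a Python sketch (all names here are hypothetical; the
half-size segment and the 5% cap are just the illustrative numbers from
the paragraph above):

    # Sketch of the latency nudge, run once per RTT sample.
    MAX_NUDGE = 0.05  # latency may trim the target rate by at most 5%

    def nudged_rate_and_segment(current_rtt, min_rtt, target_rate, mss):
        """Return (target_rate, next_segment_size) for the next send."""
        if current_rtt <= min_rtt:
            return target_rate, mss  # no standing queue: keep rate and size
        # Standing queue detected (e.g. 60ms RTT vs. a 50ms min): keep
        # roughly the same rate, but send one half-size segment so the
        # buffer can drain, and never cut the rate by more than MAX_NUDGE.
        return target_rate * (1.0 - MAX_NUDGE), mss // 2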

I guess the starting pattern could be something like this:
1) Build up, doubling segments per ACK.
2) Make sure ACKed bytes increase at the same rate as bytes sent.
3) Once the ACKed-byte rate stops increasing, reduce the segment size or
skip a packet until the current RTT is near the target RTT, but reduce
the rate by no more than 5%.
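A minimal sketch of that loop, under the same caveats (the 95% "keeping
up" threshold is my own placeholder, not a measured number):

    class FlowState:
        def __init__(self):
            self.segments_per_ack = 1  # step 1 starts here and doubles
            self.growing = True
            self.next_probe = 0.0      # used by the probe sketch below

    def on_rtt_sample(state, acked_bytes, sent_bytes, interval):
        """Called once per RTT-sized interval with that interval's counts."""
        ack_rate = acked_bytes / interval   # step 2: delivery rate
        send_rate = sent_bytes / interval
        if state.growing and ack_rate >= 0.95 * send_rate:
            state.segments_per_ack *= 2     # step 1: delivery keeps up
        elif state.growing:
            # step 3: ACKed-byte rate flattened; stop growing and let the
            # latency nudge above trim toward the target RTT (at most 5%).
            state.growing = False
        return state.segments_per_ack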

Of course this only discovers the currently free bandwidth. There needs
to be a way to periodically probe for newly freed bandwidth. The idea is
that the detected maximum should rarely change, so don't be aggressive
when probing near the max; but when below the max, attempt to find free
bandwidth by adding additional segments and seeing if the ACKed-byte
rate increases. If it does, start growing again.
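And a sketch of the probe, reusing FlowState from above (the one-second
probe interval and the 90%-of-max cutoff are arbitrary placeholders):

    PROBE_INTERVAL = 1.0  # seconds between probes; arbitrary placeholder

    def probe_segments(state, ack_rate, max_ack_rate, now):
        """Return how many extra segments to inject on this probe, if any.
        If the ACKed-byte rate rises afterward, the caller sets
        state.growing = True and the ramp-up loop takes over again."""
        if now < state.next_probe:
            return 0
        state.next_probe = now + PROBE_INTERVAL
        if ack_rate >= 0.9 * max_ack_rate:
            return 1  # near the detected max: probe gently
        return 2      # well below the max: add segments more aggressively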

I don't have an engineering background, just playing with thoughts.


> [Bloat] TCP congestion detection - random thoughts
>
> Alan Jenkins alan.christopher.jenkins at gmail.com
> Sun Jun 21 10:05:52 PDT 2015
> Hi Ben
>
> Some possible Sunday reading relating to these thoughts :).
>
> https://lwn.net/Articles/645115/ "Delay-gradient congestion control"
> [2015, Linux partial implementation]
>
> our Dave's reply to a comment:
>
> https://lwn.net/Articles/647322/
>
> Quote "there is a huge bias (based on the experimental evidence) that
> classic delay based tcps lost out to loss based in an undeployable
fashion"
>
> - not to argue the quote either way.  But there are some implicit
> references there that are relevant.  Primarily a well-documented result
> on TCP Vegas.  AIUI Vegas uses increased delay as well as loss/marks as
> a congestion signal.  As a result, it gets a lower share of the
> bottleneck bandwidth when competing with other TCPs. Secondly uTP has a
> latency (increase) target (of 100ms :p), _deliberately_ to de-prioritize
> itself.  (This is called LEDBAT and has also been implemented as a TCP).
>
> Alan
>
>
> On 21/06/15 17:19, Benjamin Cronce wrote:
> > Just a random Sunday morning thought that has probably been thought
> > of before, but I can't recall hearing it before.
> >
> > My understanding of most TCP congestion control algorithms is that
> > they primarily watch for drops, which are indicated by the receiving
> > party via ACKs. The issue with this is that TCP keeps pushing more
> > data into the window until a drop is signaled, even if the received
> > rate is not increasing. What if the sending TCP also monitored the
> > received rate and backed off from cramming more segments into the
> > window when the received rate does not increase?
> >
> > Two things could measure this: RTT, which is already part of TCP
> > statistics, and the rate at which bytes are ACKed. If you double the
> > number of segments being sent but, within a time frame relative to
> > the RTT, do not see a meaningful increase in the rate at which bytes
> > are being ACKed, you may want to back off.
> >
> > It just seems to me that if you have a 50ms RTT and 10 seconds of
> > bufferbloat, TCP is cramming data down the path with no regard for
> > how quickly data is actually getting ACKed; it is just waiting for
> > the first segment to get dropped, which would never happen in an
> > infinitely buffered network.
> >
> > TCP should be able to keep state that tracks the minimum RTT and the
> > maximum ACK rate. Between these two, it should not go over the max
> > path rate except when probing for a new max or min. Min RTT is
> > probably a good target because path latency should be relatively
> > static; however, the path's free bandwidth is not static, so the
> > desirable number of segments in flight would need to change, but
> > would be bounded by the max.
> >
> > Of course Nagle-type algorithms (and delayed ACKs) can mess with
> > this, because when an ACK occurs is no longer based entirely on when
> > a segment is received, but also on some additional amount of time. If
> > you assume that N segments will be coalesced into a single ACK, then
> > you need to add to the RTT the time, at the current packet rate,
> > until you expect the next ACK given that coalescing. This would be
> > especially important for low-latency, low-bandwidth paths. Coalescing
> > behavior could be assumed, negotiated, or inferred; negotiated would
> > be best.
> >
> > Anyway, just some random Sunday thoughts.
>