[Bloat] Fair queuing detection for congestion control

Dave Taht dave.taht at gmail.com
Thu Oct 13 10:17:11 EDT 2022


On Wed, Oct 12, 2022 at 10:35 AM Maximilian Bachl
<maximilian.bachl at gmail.com> wrote:
>
> Building upon the ideas and advice I received, I simplified the whole concept and updated the preprint (https://arxiv.org/abs/2206.10561). The new approach is somewhat similar to what you propose in point 3). True negative rate (correctly detecting the absence of FQ) is now >99%; True positive rate is >95% (correctly detecting the presence of FQ (fq_codel and fq)). It can also detect if the bottleneck link changes during a flow from FQ to non-FQ and vice versa.

That is really marvelous detection work, worth leveraging.

> A new concept is that each application can choose its maximum allowed delay independently if there's FQ. A cloud gaming application might choose to not allow more than 5 ms to keep latency minimal, while a video chat application might allow 25 ms to achieve higher throughput. Thus, each application can choose its own tradeoff between throughput and delay. Also, applications can measure how large the base delay is and, if the base delay is very low (because the other host is close by), they can allow more queuing delay. For example, if the base delay between two hosts is just 5 ms, it could be ok to add another 45 ms of queuing to have a combined delay of 50 ms. Because the allowed queuing delay is quite high, throughput is maximized.
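
For concreteness, the budget rule described above boils down to
something like this (a rough sketch; the function name and constants
are mine, not from the paper):

    def queuing_budget_ms(base_delay_ms, combined_target_ms=50.0,
                          floor_ms=5.0):
        # The application picks a combined-delay target and grants the
        # network whatever is left after the measured base delay.
        return max(floor_ms, combined_target_ms - base_delay_ms)

    print(queuing_budget_ms(5.0))    # nearby host: allow 45 ms of queuing
    print(queuing_budget_ms(45.0))   # distant host: only the 5 ms floor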

As promising as this addition is for QUIC, I have to take umbrage at
the idea that "an application can pick the right amount of buffering."

First: the ideal amount of network buffering is... zero. Why would an
application want excess buffering? There isn't much of a tradeoff
between throughput and delay.

FQ, now deployed (nearly) everywhere, makes it possible for delay-based
transports to "just work". Once FQ is found, an application can quickly
probe for the right rate and then just motor along at some rate (well)
below it. A VR or AR application, especially, becomes immune to the
jitter and latency induced by other flows on the link, and mostly
immune to the sudden bandwidth changes you can get from wireless links.

You can probe for more bandwidth periodically via a flow you don't care about.
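
In rough pseudocode, with probe_rate() and send_at() as placeholders
for whatever transport machinery you have (a sketch, untested):

    import time

    def run_below_probed_rate(probe_rate, send_at, headroom=0.85,
                              reprobe_every_s=10.0):
        # probe_rate() briefly saturates a throwaway side flow and
        # returns the delivery rate the network sustained; send_at(bps)
        # paces the latency-sensitive flow well below that estimate.
        rate = probe_rate()
        last = time.monotonic()
        while True:
            send_at(rate * headroom)
            if time.monotonic() - last > reprobe_every_s:
                rate = probe_rate()   # periodic re-probe on the side flow
                last = time.monotonic()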

There's a pretty big knee in the bandwidth curve for wifi, I'll admit
(aggregation is responsible for 60% or so of the bandwidth), but even
then you only need an extra 5 ms... and if your application doesn't
need all that bandwidth, it's better to target 0.

Secondly, the AQM in fq_codel and cake aims for a 5 ms target. It's
presently a bit larger in the wifi implementations (20 ms in the field,
8 ms in testing), so if you aim for buffering larger than that, those
algorithms will start dropping or marking your packets 100 ms after you
consistently exceed the target. You can (and probably should) be using
lossless ECN marks instead: if you really want buffering for some
reason, the AQM will just send an ever-increasing number of marks back
to the sender for as long as you exceed the locally configured target,
which I guess is a useful signal, and at least it doesn't drop packets.
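
For reference, the schedule that produces those marks/drops looks
roughly like this (a simplified sketch of codel's control law; real
implementations differ in detail):

    from math import sqrt

    TARGET_MS, INTERVAL_MS = 5.0, 100.0   # codel defaults on ethernet

    def mark_times_ms(n_marks=4):
        # Nothing happens until the standing queue has exceeded
        # TARGET_MS for one full INTERVAL_MS; marks/drops then arrive
        # at intervals shrinking as INTERVAL_MS / sqrt(count).
        t, out, count = INTERVAL_MS, [], 1
        for _ in range(n_marks):
            out.append(round(t, 1))
            count += 1
            t += INTERVAL_MS / sqrt(count)
        return out

    print(mark_times_ms())   # [100.0, 170.7, 228.4, 278.4]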

The circumstances where an application might want more than 5 ms of
delay from an FQ'd network seem few. It's putting the cart before the
hearse.



>
>
> On Sun, Jul 3, 2022 at 4:49 PM Dave Taht <dave.taht at gmail.com> wrote:
>>
>> Hey, good start to my Saturday!
>>
>> 1) Apple's fq_"codel" implementation did not actually implement the
>> codel portion of the algorithm when I last checked last year. Doesn't
>> matter what you set the target to.
>>
>> 2) fq_codel has a detectable (IMHO; I have not tried) phase where the
>> "sparse flow optimization" allows non-queue-building flows to bypass
>> the queue-building flows entirely (see the first sketch below this
>> list). See attached. fq_pie does this too. Cake also has it, but with
>> the addition of per-host FQ.
>>
>> However, detecting it requires sending packets on an interval smaller
>> than the codel quantum. Most (all?) TCP implementations, even the
>> paced ones, send two 1514-byte packets back to back, so you get an ack
>> back on servicing either the first or the second one. Sending
>> individual TCP packets paced, and bunching them up selectively, should
>> also let you oscillate around the queue width (width = the number of
>> queue-building flows; depth = the depth of the queue). The codel
>> quantum defaults to 1514 bytes but is frequently autoscaled to less at
>> low bandwidths.
>>
>> 3) It is also possible (IMHO) to send a small secondary flow
>> isochronously as a "clock" and observe the width and depth of the
>> queue that way (see the second sketch below this list).
>>
>> 4) You can use an fq_codel RFC3168-compliant implementation to send
>> back a CE, which is (presently) a fairly reliable signal of fq_codel
>> on the path (see the third sketch below this list). A reduction in
>> *pacing* different from the RFC3168 behavior (a reduction by half)
>> would be interesting.
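
To make point 2) concrete, here is a rough, untested sketch of that
kind of detection: flood on one flow to build a queue, pace tiny probes
on a second flow that stays sparse, and compare the medians. The echo
address, packet sizes, and threshold are all illustrative placeholders.

    import socket, threading, time

    ECHO = ("192.0.2.1", 7)   # placeholder; assumes a UDP echo service

    def flood(stop, size=1400):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"x" * size
        while not stop.is_set():
            s.sendto(payload, ECHO)        # the queue-building flow

    def probe_rtts(n=50, gap_s=0.02):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2.0)
        out = []
        for _ in range(n):
            t0 = time.monotonic()
            s.sendto(b"x" * 64, ECHO)      # small and paced: stays sparse
            s.recvfrom(2048)
            out.append((time.monotonic() - t0) * 1000.0)
            time.sleep(gap_s)
        return out

    def median(xs):
        return sorted(xs)[len(xs) // 2]

    def fq_present():
        base = median(probe_rtts())        # quiet baseline
        stop = threading.Event()
        t = threading.Thread(target=flood, args=(stop,))
        t.start()
        loaded = median(probe_rtts())      # probe while the flood runs
        stop.set()
        t.join()
        # Under FQ the sparse probes bypass the flood's queue, so the
        # loaded median stays near base; behind one FIFO it balloons.
        return loaded - base < 5.0         # ms; arbitrary threshold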
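
And a sketch of the isochronous "clock" flow from point 3): transmit on
a fixed period and study the receive-side spacing, whose deviations
carry the round-robin signature. Again untested; the period and counts
are arbitrary.

    import socket, time

    PERIOD_S = 0.01                        # 10 ms clock tick

    def clock_sender(addr, n=500):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        nxt = time.monotonic()
        for seq in range(n):
            s.sendto(seq.to_bytes(4, "big"), addr)
            nxt += PERIOD_S
            time.sleep(max(0.0, nxt - time.monotonic()))

    def clock_gaps(port, n=500):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", port))
        gaps, last = [], None
        for _ in range(n):
            s.recvfrom(64)
            now = time.monotonic()
            if last is not None:
                # Spacing stretches by roughly one quantum's
                # serialization time per competing backlogged flow:
                # the queue's width.
                gaps.append((now - last) - PERIOD_S)
            last = now
        return gaps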
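
For point 4), watching for CE on a UDP socket looks roughly like this
on Linux (an untested sketch; IP_RECVTOS is missing from some Python
builds, so its Linux value is used as a fallback):

    import socket

    IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)  # Linux option number
    ECT0, CE = 0x02, 0x03

    def watch_for_ce(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)  # send ECT(0)
        s.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)  # get TOS back
        s.bind(("", port))
        while True:
            data, anc, flags, addr = s.recvmsg(2048, 64)
            for level, ctype, cdata in anc:
                if level == socket.IPPROTO_IP and cdata and \
                   (cdata[0] & 0x03) == CE:
                    print("CE from", addr)  # an AQM squeezed this flow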
>>
>> Thx for this today! A principal observation of the BBR paper was that
>> you cannot measure latency and bandwidth *at the same time* in a
>> single flow, and your showing that, in an FQ'd environment, you can is
>> something I don't remember seeing elsewhere (but I'm sure someone will
>> correct me).
>>
>> On Sun, Jul 3, 2022 at 7:16 AM Maximilian Bachl via Bloat
>> <bloat at lists.bufferbloat.net> wrote:
>> >
>> > Hi Sebastian,
>> >
>> > Thank you for your suggestions.
>> >
>> > Regarding
>> > a) I slightly modified the algorithm to make it work better with the small 5 ms threshold. I updated the paper on arXiv; it should be online by Tuesday morning Central European Time. Detection accuracy for Linux's fq_codel is quite high (high 90s) but it doesn't work that well with small bandwidths (<=10 Mbit/s).
>> > b) That's a good suggestion. I'm thinking about how best to do it, since every experiment with every RTT/bandwidth combination was also repeated, and I'm not sure how to make a CDF that includes both the RTTs/bandwidths and the repetitions.
>> > c) I guess for every experiment with pfifo, the resulting accuracy is a true negative rate, while for every experiment with fq* it is a true positive rate. I updated the paper to include these terms to make it clearer. In summary, the true negative rate is 100%, the true positive rate for fq is >= 95%, and for fq_codel it is also in that range except at low bandwidths.
>> >
>> > In case you're interested in reliable FQ detection but not in the combination of FQ detection and congestion control, I co-authored another paper which uses a different FQ detection method, which is more robust but has the disadvantage of causing packet loss (Detecting Fair Queuing for Better Congestion Control (https://arxiv.org/abs/2010.08362)).
>> >
>> > Regards,
>> > Max
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC

