General list for discussing Bufferbloat
* [Bloat] Fair queuing detection for congestion control
@ 2022-06-27 16:23 Maximilian Bachl
  2022-07-01  9:37 ` Sebastian Moeller
  0 siblings, 1 reply; 7+ messages in thread
From: Maximilian Bachl @ 2022-06-27 16:23 UTC (permalink / raw)
  To: bloat


This paper (pre-print)
https://arxiv.org/abs/2206.10561 proposes a mechanism to monitor the
presence of FQ continuously during a flow’s lifetime. This can be used to
change the congestion control depending on the presence of FQ.

Furthermore, the paper argues that the presence of FQ can itself be considered a
congestion signal: FQ can only be detected when there is congestion.
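The intuition can be sketched in a few lines (illustrative only, not the paper's actual algorithm; the function and numbers are made up): under an ideal FQ scheduler at a congested bottleneck, a flow's throughput is capped at its fair share no matter how fast it sends, and that cap is what makes FQ observable.

```python
def fq_throughput(send_rates, link_capacity, n_flows):
    """Throughput of one flow under ideal fair queuing: once the link is
    congested, the flow gets at most capacity / n_flows, regardless of
    how fast it sends."""
    fair_share = link_capacity / n_flows
    return [min(rate, fair_share) for rate in send_rates]

# If raising the send rate no longer raises throughput, the flow has hit
# its fair share -- which implies both congestion and an FQ-like scheduler.
print(fq_throughput([5, 20, 40], link_capacity=100, n_flows=10))
```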


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [Bloat] Fair queuing detection for congestion control
  2022-06-27 16:23 [Bloat] Fair queuing detection for congestion control Maximilian Bachl
@ 2022-07-01  9:37 ` Sebastian Moeller
  2022-07-03 14:16   ` Maximilian Bachl
  0 siblings, 1 reply; 7+ messages in thread
From: Sebastian Moeller @ 2022-07-01  9:37 UTC (permalink / raw)
  To: Maximilian Bachl; +Cc: bloat

Hi Maximilian,

I read the following:
"D. Other variants of fair queuing

We also evaluated the performance of our fair queuing detection on a bottleneck managed by fq codel [5]. We chose a default target queuing delay of 10ms following Apple’s implementation because we argue that Apple probably spent a considerable amount of time fine-tuning their implementation and came to the conclusion that 10 ms work best as the default target delay."


And wonder whether you could:
a) repeat that experiment with fq_codel's defaults of 100ms interval and 5ms target using the Linux implementation. I am not saying Apple might not have a decent rationale for their choice, but as far as I can tell they have not communicated that rationale. The Linux defaults however are explained relatively well in e.g. fq_codel's IETF RFC.
b) produce some CDF plots that show the detection accuracy for the different RTTs and rates (you can probably combine either all RTTs or all rates into one plot)
c) maybe use signal detection theory terms to show the performance in terms that include false positive and false negative classifications?
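A sketch of how (b) could be done, pooling repetitions into one empirical CDF per RTT (the accuracy values and RTT keys below are hypothetical placeholders, and the plotting call is only indicated in a comment):

```python
def empirical_cdf(samples):
    """Return (x, y) points of the empirical CDF of `samples`."""
    xs = sorted(samples)
    n = len(xs)
    ys = [(i + 1) / n for i in range(n)]
    return xs, ys

# Pool detection-accuracy values across repetitions for each RTT,
# then draw one CDF line per RTT, e.g. with matplotlib's plt.step().
accuracies_by_rtt_ms = {
    10: [0.99, 0.97, 1.00, 0.95],  # hypothetical repetitions, RTT = 10 ms
    50: [0.96, 0.98, 0.94, 0.99],  # hypothetical repetitions, RTT = 50 ms
}
for rtt, accuracies in accuracies_by_rtt_ms.items():
    xs, ys = empirical_cdf(accuracies)
    # plt.step(xs, ys, where="post", label=f"RTT {rtt} ms")
```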

Regards
	Sebastian


> On Jun 27, 2022, at 18:23, Maximilian Bachl via Bloat <bloat@lists.bufferbloat.net> wrote:
> 
> This paper (pre-print) 
> https://arxiv.org/abs/2206.10561 proposes a mechanism to monitor the presence of FQ continuously during a flow’s lifetime. This can be used to change the congestion control depending on the presence of FQ.
> 
> Furthermore, the paper argues that the presence of FQ can be considered a congestion signal: Only if there’s congestion, FQ can be detected. 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



* Re: [Bloat] Fair queuing detection for congestion control
  2022-07-01  9:37 ` Sebastian Moeller
@ 2022-07-03 14:16   ` Maximilian Bachl
  2022-07-03 14:49     ` Dave Taht
  0 siblings, 1 reply; 7+ messages in thread
From: Maximilian Bachl @ 2022-07-03 14:16 UTC (permalink / raw)
  To: bloat


Hi Sebastian,

Thank you for your suggestions.

Regarding
a) I slightly modified the algorithm to make it work better with the small
5 ms threshold. I updated the paper on arXiv; it should be online by
Tuesday morning Central European Time. Detection accuracy for
Linux's fq_codel is quite high (high 90s) but it doesn't work that well
with small bandwidths (<=10 Mbit/s).
b) that's a good suggestion. I'm thinking about how best to do it, since
every experiment with every RTT/bandwidth combination was also repeated, and
I'm not sure how to make a CDF that includes the RTTs/bandwidths as well as
the repetitions.
c) I guess for every experiment with pfifo, the resulting accuracy is a
true negative rate, while for every experiment with fq* the resulting
accuracy is a true positive rate. I updated the paper to include these
terms to make it clearer. Summarizing, the true negative rate is 100%, the
true positive rate for fq is >= 95% and for fq_codel it's also in that
range except for low bandwidths.
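A minimal sketch of how those rates follow from the per-experiment ground truth (the qdisc names and the `startswith("fq")` labeling convention are illustrative assumptions, not the paper's code):

```python
def detection_rates(results):
    """Compute (TPR, TNR) from per-experiment outcomes.

    results: list of (qdisc, detected) pairs, where qdisc names the
    bottleneck qdisc and detected is the detector's FQ verdict.
    Experiments on fq/fq_codel count as positives; pfifo etc. as negatives.
    """
    positives = [d for q, d in results if q.startswith("fq")]
    negatives = [d for q, d in results if not q.startswith("fq")]
    tpr = sum(positives) / len(positives)          # true positive rate
    tnr = sum(not d for d in negatives) / len(negatives)  # true negative rate
    return tpr, tnr
```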

In case you're interested in reliable FQ detection but not in the
combination of FQ detection and congestion control, I co-authored another
paper, "Detecting Fair Queuing for Better Congestion Control"
(https://arxiv.org/abs/2010.08362), which uses a different FQ detection
method that is more robust but has the disadvantage of causing packet loss.

Regards,
Max



* Re: [Bloat] Fair queuing detection for congestion control
  2022-07-03 14:16   ` Maximilian Bachl
@ 2022-07-03 14:49     ` Dave Taht
  2022-10-12 17:35       ` Maximilian Bachl
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Taht @ 2022-07-03 14:49 UTC (permalink / raw)
  To: Maximilian Bachl; +Cc: bloat


Hey, good start to my saturday!

1) Apple's fq_"codel" implementation did not actually implement the
codel portion of the algorithm when I last checked last year. Doesn't
matter what you set the target to.

2) fq_codel has a detectable (IMHO, have not tried) phase where the
"sparse flow optimization" allows non-queue-building flows to bypass the
queue-building flows entirely. See attached. fq-pie, also. Cake also has
this, but with the addition of per-host FQ.

However, detecting it requires sending packets at an interval smaller
than the codel quantum. Most (all!?) TCP implementations, even the
paced ones, send two 1514-byte packets back to back, so you get an ack
back on servicing either the first or the second one. Sending individual
TCP packets paced, and bunching them up selectively, should also
oscillate around the queue width (width = the number of queue-building
flows; depth = the depth of the queue). The codel quantum defaults to
1514 bytes but is frequently autoscaled to less at low bandwidths.

3) It is also possible, (IMHO), to send a small secondary flow
isochronously as a "clock" and observe the width and depth of the
queue that way.

4) You can use an RFC 3168-compliant fq_codel implementation to send
back a CE, which is (presently) a fairly reliable signal of fq_codel
on the path. A reduction in *pacing* different from the RFC 3168
behavior (a reduction by half) would be interesting.
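The measurement idea in 3) can be sketched as follows (illustrative only; the probe delays are made up, and taking the minimum observed delay as the base delay is a simplification, since the minimum is itself only an estimate):

```python
def probe_sojourns_ms(probe_delays_ms):
    """Estimate per-probe queuing delay from a small, isochronously sent
    probe flow: treat the minimum observed delay as the base (propagation)
    delay and the excess over it as time spent queued."""
    base = min(probe_delays_ms)
    return [d - base for d in probe_delays_ms]

# Oscillation in the sojourn series reflects the width/depth of the queue
# the probes travel through.
print(probe_sojourns_ms([10, 12, 15, 10]))
```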

Thx for this today! A principal observation of the BBR paper was that
you cannot measure latency and bandwidth *at the same time* in a
single flow, and your showing that, in an FQ'd environment, you can is
something I don't remember seeing elsewhere (but I'm sure someone will
correct me).

On Sun, Jul 3, 2022 at 7:16 AM Maximilian Bachl via Bloat
<bloat@lists.bufferbloat.net> wrote:
>
> Hi Sebastian,
>
> Thank you for your suggestions.
>
> Regarding
> a) I slightly modified the algorithm to make it work better with the small 5 ms threshold. I updated the paper on arXiv; it should be online by Tuesday morning Central European Time. Detection accuracy for Linux's fq_codel is quite high (high 90s) but it doesn't work that well with small bandwidths (<=10 Mbit/s).
> b) that's a good suggestion. I'm thinking how to do it best since also every experiment with every RTT/bandwidth was repeated and I'm not sure how to make a CDF that includes the RTTs/bandwidths and the repetitions.
> c) I guess for every experiment with pfifo, the resulting accuracy is a true negative rate, while for every experiment with fq* the resulting accuracy is a true positive rate. I updated the paper to include these terms to make it clearer. Summarizing, the true negative rate is 100%, the true positive rate for fq is >= 95% and for fq_codel it's also in that range except for low bandwidths.
>
> In case you're interested in reliable FQ detection but not in the combination of FQ detection and congestion control, I co-authored another paper which uses a different FQ detection method, which is more robust but has the disadvantage of causing packet loss (Detecting Fair Queuing for Better Congestion Control (https://arxiv.org/abs/2010.08362)).
>
> Regards,
> Max
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

[-- Attachment #2: Analysing_the_Latency_of_Sparse_Flows_in_the_FQ-Co.pdf --]
[-- Type: application/pdf, Size: 507010 bytes --]


* Re: [Bloat] Fair queuing detection for congestion control
  2022-07-03 14:49     ` Dave Taht
@ 2022-10-12 17:35       ` Maximilian Bachl
  2022-10-12 18:16         ` Dave Taht
  2022-10-13 14:17         ` Dave Taht
  0 siblings, 2 replies; 7+ messages in thread
From: Maximilian Bachl @ 2022-10-12 17:35 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat


Building upon the ideas and advice I received, I simplified the whole
concept and updated the preprint (https://arxiv.org/abs/2206.10561). The
new approach is somewhat similar to what you propose in point 3). The true
negative rate (correctly detecting the absence of FQ) is now >99%; the true
positive rate (correctly detecting the presence of FQ, for both fq_codel
and fq) is >95%. It can also detect if the bottleneck link changes during a
flow from FQ to non-FQ and vice versa.

A new concept is that each application can choose its maximum allowed delay
independently if there's FQ. A cloud gaming application might choose to not
allow more than 5 ms to keep latency minimal, while a video chat
application might allow 25 ms to achieve higher throughput. Thus, each
application can choose its own tradeoff between throughput and delay. Also,
applications can measure how large the base delay is and, if the base delay
is very low (because the other host is close by), they can allow more
queuing delay. For example, if the base delay between two hosts is just 5
ms, it could be ok to add another 45 ms of queuing to have a combined delay
of 50 ms. Because the allowed queuing delay is quite high, throughput is
maximized.
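That per-application budget is simple arithmetic; a sketch (the function name and example numbers are mine, following the scenarios above):

```python
def queuing_budget_ms(base_delay_ms, target_total_ms):
    """Queuing delay an application may allow under FQ, given its measured
    base delay and the total delay it is willing to tolerate."""
    return max(0.0, target_total_ms - base_delay_ms)

# Video chat between nearby hosts: base delay 5 ms, total target 50 ms,
# so up to 45 ms of queuing can be allowed and throughput is maximized.
# A far-away peer with base delay 60 ms leaves no queuing budget at all.
print(queuing_budget_ms(5, 50))
```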



On Sun, Jul 3, 2022 at 4:49 PM Dave Taht <dave.taht@gmail.com> wrote:

> Hey, good start to my saturday!
>
> 1) Apple's fq_"codel" implementation did not actually implement the
> codel portion of the algorithm when I last checked last year. Doesn't
> matter what you set the target to.
>
> 2) fq_codel has a detectable (IMHO, have not tried) phase where the
> "sparse flow optimization" allows non queue building flows to bypass
> the queue building
> flows entirely. See attached. fq-pie, also. Cake also has this, but
> with the addition of per host FQ.
>
> However to detect it, requires sending packets on an interval smaller
> than the codel quantum. Most (all!?) TCP implementations, even the
> paced ones, send 2 1514 packets back to back, so you get an ack back
> on servicing either the first or second one. Sending individual TCP
> packets paced, and bunching them up selectively should also oscillate
> around the queue width. (width = number of queue building flows,
> depth, the depth of the queue). The codel quantum defaults to 1514
> bytes but is frequently autoscaled to less at low bandwidths.
>
> 3) It is also possible, (IMHO), to send a small secondary flow
> isochronously as a "clock" and observe the width and depth of the
> queue that way.
>
> 4) You can use a fq_codel RFC3168 compliant implementation to send
> back a CE, which is (presently) a fairly reliable signal of fq_codel
> on the path. A reduction in *pacing* different from what the RFC3168
> behavior is (reduction by half), would be interesting.
>
> Thx for this today! A principal observation of the BBR paper was that
> you cannot measure for latency and bandwidth *at the same time* in a
> single and you showing, in a FQ'd environment, that you can, I don't
> remember seeing elsewhere (but I'm sure someone will correct me).
>
> On Sun, Jul 3, 2022 at 7:16 AM Maximilian Bachl via Bloat
> <bloat@lists.bufferbloat.net> wrote:
> >
> > Hi Sebastian,
> >
> > Thank you for your suggestions.
> >
> > Regarding
> > a) I slightly modified the algorithm to make it work better with the
> small 5 ms threshold. I updated the paper on arXiv; it should be online by
> Tuesday morning Central European Time. Detection accuracy for Linux's
> fq_codel is quite high (high 90s) but it doesn't work that well with small
> bandwidths (<=10 Mbit/s).
> > b) that's a good suggestion. I'm thinking how to do it best since also
> every experiment with every RTT/bandwidth was repeated and I'm not sure how
> to make a CDF that includes the RTTs/bandwidths and the repetitions.
> > c) I guess for every experiment with pfifo, the resulting accuracy is a
> true negative rate, while for every experiment with fq* the resulting
> accuracy is a true positive rate. I updated the paper to include these
> terms to make it clearer. Summarizing, the true negative rate is 100%, the
> true positive rate for fq is >= 95% and for fq_codel it's also in that
> range except for low bandwidths.
> >
> > In case you're interested in reliable FQ detection but not in the
> combination of FQ detection and congestion control, I co-authored another
> paper which uses a different FQ detection method, which is more robust but
> has the disadvantage of causing packet loss (Detecting Fair Queuing for
> Better Congestion Control (https://arxiv.org/abs/2010.08362)).
> >
> > Regards,
> > Max
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
>



* Re: [Bloat] Fair queuing detection for congestion control
  2022-10-12 17:35       ` Maximilian Bachl
@ 2022-10-12 18:16         ` Dave Taht
  2022-10-13 14:17         ` Dave Taht
  1 sibling, 0 replies; 7+ messages in thread
From: Dave Taht @ 2022-10-12 18:16 UTC (permalink / raw)
  To: Maximilian Bachl; +Cc: bloat

LGTM!!! I hope you plan to submit this somewhere, usenix perhaps?

On Wed, Oct 12, 2022 at 10:35 AM Maximilian Bachl
<maximilian.bachl@gmail.com> wrote:
>
> Building upon the ideas and advice I received, I simplified the whole concept and updated the preprint (https://arxiv.org/abs/2206.10561). The new approach is somewhat similar to what you propose in point 3). True negative rate (correctly detecting the absence of FQ) is now >99%; True positive rate is >95% (correctly detecting the presence of FQ (fq_codel and fq)). It can also detect if the bottleneck link changes during a flow from FQ to non-FQ and vice versa.
>
> A new concept is that each application can choose its maximum allowed delay independently if there's FQ. A cloud gaming application might choose to not allow more than 5 ms to keep latency minimal, while a video chat application might allow 25 ms to achieve higher throughput. Thus, each application can choose its own tradeoff between throughput and delay. Also, applications can measure how large the base delay is and, if the base delay is very low (because the other host is close by), they can allow more queuing delay. For example, if the base delay between two hosts is just 5 ms, it could be ok to add another 45 ms of queuing to have a combined delay of 50 ms. Because the allowed queuing delay is quite high, throughput is maximized.
>
>
>
> On Sun, Jul 3, 2022 at 4:49 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> Hey, good start to my saturday!
>>
>> 1) Apple's fq_"codel" implementation did not actually implement the
>> codel portion of the algorithm when I last checked last year. Doesn't
>> matter what you set the target to.
>>
>> 2) fq_codel has a detectable (IMHO, have not tried) phase where the
>> "sparse flow optimization" allows non queue building flows to bypass
>> the queue building
>> flows entirely. See attached. fq-pie, also. Cake also has this, but
>> with the addition of per host FQ.
>>
>> However to detect it, requires sending packets on an interval smaller
>> than the codel quantum. Most (all!?) TCP implementations, even the
>> paced ones, send 2 1514 packets back to back, so you get an ack back
>> on servicing either the first or second one. Sending individual TCP
>> packets paced, and bunching them up selectively should also oscillate
>> around the queue width. (width = number of queue building flows,
>> depth, the depth of the queue). The codel quantum defaults to 1514
>> bytes but is frequently autoscaled to less at low bandwidths.
>>
>> 3) It is also possible, (IMHO), to send a small secondary flow
>> isochronously as a "clock" and observe the width and depth of the
>> queue that way.
>>
>> 4) You can use a fq_codel RFC3168 compliant implementation to send
>> back a CE, which is (presently) a fairly reliable signal of fq_codel
>> on the path. A reduction in *pacing* different from what the RFC3168
>> behavior is (reduction by half), would be interesting.
>>
>> Thx for this today! A principal observation of the BBR paper was that
>> you cannot measure for latency and bandwidth *at the same time* in a
>> single and you showing, in a FQ'd environment, that you can, I don't
>> remember seeing elsewhere (but I'm sure someone will correct me).
>>
>> On Sun, Jul 3, 2022 at 7:16 AM Maximilian Bachl via Bloat
>> <bloat@lists.bufferbloat.net> wrote:
>> >
>> > Hi Sebastian,
>> >
>> > Thank you for your suggestions.
>> >
>> > Regarding
>> > a) I slightly modified the algorithm to make it work better with the small 5 ms threshold. I updated the paper on arXiv; it should be online by Tuesday morning Central European Time. Detection accuracy for Linux's fq_codel is quite high (high 90s) but it doesn't work that well with small bandwidths (<=10 Mbit/s).
>> > b) that's a good suggestion. I'm thinking how to do it best since also every experiment with every RTT/bandwidth was repeated and I'm not sure how to make a CDF that includes the RTTs/bandwidths and the repetitions.
>> > c) I guess for every experiment with pfifo, the resulting accuracy is a true negative rate, while for every experiment with fq* the resulting accuracy is a true positive rate. I updated the paper to include these terms to make it clearer. Summarizing, the true negative rate is 100%, the true positive rate for fq is >= 95% and for fq_codel it's also in that range except for low bandwidths.
>> >
>> > In case you're interested in reliable FQ detection but not in the combination of FQ detection and congestion control, I co-authored another paper which uses a different FQ detection method, which is more robust but has the disadvantage of causing packet loss (Detecting Fair Queuing for Better Congestion Control (https://arxiv.org/abs/2010.08362)).
>> >
>> > Regards,
>> > Max
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC


* Re: [Bloat] Fair queuing detection for congestion control
  2022-10-12 17:35       ` Maximilian Bachl
  2022-10-12 18:16         ` Dave Taht
@ 2022-10-13 14:17         ` Dave Taht
  1 sibling, 0 replies; 7+ messages in thread
From: Dave Taht @ 2022-10-13 14:17 UTC (permalink / raw)
  To: Maximilian Bachl; +Cc: bloat

On Wed, Oct 12, 2022 at 10:35 AM Maximilian Bachl
<maximilian.bachl@gmail.com> wrote:
>
> Building upon the ideas and advice I received, I simplified the whole concept and updated the preprint (https://arxiv.org/abs/2206.10561). The new approach is somewhat similar to what you propose in point 3). True negative rate (correctly detecting the absence of FQ) is now >99%; True positive rate is >95% (correctly detecting the presence of FQ (fq_codel and fq)). It can also detect if the bottleneck link changes during a flow from FQ to non-FQ and vice versa.

That is really marvelous detection work, worth leveraging.

> A new concept is that each application can choose its maximum allowed delay independently if there's FQ. A cloud gaming application might choose to not allow more than 5 ms to keep latency minimal, while a video chat application might allow 25 ms to achieve higher throughput. Thus, each application can choose its own tradeoff between throughput and delay. Also, applications can measure how large the base delay is and, if the base delay is very low (because the other host is close by), they can allow more queuing delay. For example, if the base delay between two hosts is just 5 ms, it could be ok to add another 45 ms of queuing to have a combined delay of 50 ms. Because the allowed queuing delay is quite high, throughput is maximized.

As promising as this addition is to quic, I have to take umbrage with
the idea that "an application can pick the right amount of buffering."

First: The ideal amount of network buffering is... zero. Why would an
application want to have excess buffering? There isn't much of a
tradeoff between throughput and delay.

FQ nowadays (nearly) everywhere makes it possible for delay based
transports to "just work". Once FQ is found... an application can
quickly probe for the right rate and then just motor along at some
rate (well) below that. A VR or AR application, especially, becomes
immune to the jitter and latency induced by other flows on the link,
and mostly immune to the sudden bandwidth changes you can get from
wireless links.

 You can probe for more bandwidth periodically via a flow you don't care about.

There's a pretty big knee in the bandwidth curve for wifi, I'll admit
(aggregation is responsible for 60% or so of the bandwidth), but even
then you only need an extra 5ms... and if your application doesn't
need all that bandwidth, it's better to target 0.

Secondly, the AQM in fq_codel and cake aim for a 5ms target. It's
presently a bit larger in the wifi implementations (20ms in the field,
8ms in testing), so if you aim for buffering larger than that, you
will get drops or marks from those algorithms starting at 100ms after
you consistently exceed the target. You can (and probably should) be
using lossless ECN marks instead, which (if you really want buffering
for some reason), will just send an ever increasing number of marks
back to the sender if they exceed the locally configured target, which
I guess is a useful signal, but at least it doesn't drop packets.

The circumstances where an application might want more than 5ms of
delay from a FQ'd network seem few. It's putting the cart before the
hearse.
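For reference, a simplified sketch of the CoDel control law those AQMs apply (after RFC 8289; real implementations differ in details): once packet sojourn time has stayed above the target for a full interval, CoDel marks/drops, with successive drops spaced ever closer together:

```python
from math import sqrt

TARGET_MS = 5.0      # fq_codel/cake default target
INTERVAL_MS = 100.0  # fq_codel default interval

def next_drop_gap_ms(drop_count):
    """While sojourn time stays above TARGET_MS, CoDel spaces successive
    marks/drops by interval / sqrt(count), i.e. increasingly aggressively."""
    return INTERVAL_MS / sqrt(drop_count)

# First drop comes after the target has been exceeded for a full interval
# (the "starting at 100ms" above); then the gaps shrink: 100, ~70.7, ~57.7 ms.
```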



>
>
> On Sun, Jul 3, 2022 at 4:49 PM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> Hey, good start to my saturday!
>>
>> 1) Apple's fq_"codel" implementation did not actually implement the
>> codel portion of the algorithm when I last checked last year. Doesn't
>> matter what you set the target to.
>>
>> 2) fq_codel has a detectable (IMHO, have not tried) phase where the
>> "sparse flow optimization" allows non queue building flows to bypass
>> the queue building
>> flows entirely. See attached. fq-pie, also. Cake also has this, but
>> with the addition of per host FQ.
>>
>> However to detect it, requires sending packets on an interval smaller
>> than the codel quantum. Most (all!?) TCP implementations, even the
>> paced ones, send 2 1514 packets back to back, so you get an ack back
>> on servicing either the first or second one. Sending individual TCP
>> packets paced, and bunching them up selectively should also oscillate
>> around the queue width. (width = number of queue building flows,
>> depth, the depth of the queue). The codel quantum defaults to 1514
>> bytes but is frequently autoscaled to less at low bandwidths.
>>
>> 3) It is also possible, (IMHO), to send a small secondary flow
>> isochronously as a "clock" and observe the width and depth of the
>> queue that way.
>>
>> 4) You can use a fq_codel RFC3168 compliant implementation to send
>> back a CE, which is (presently) a fairly reliable signal of fq_codel
>> on the path. A reduction in *pacing* different from what the RFC3168
>> behavior is (reduction by half), would be interesting.
>>
>> Thx for this today! A principal observation of the BBR paper was that
>> you cannot measure for latency and bandwidth *at the same time* in a
>> single and you showing, in a FQ'd environment, that you can, I don't
>> remember seeing elsewhere (but I'm sure someone will correct me).
>>
>> On Sun, Jul 3, 2022 at 7:16 AM Maximilian Bachl via Bloat
>> <bloat@lists.bufferbloat.net> wrote:
>> >
>> > Hi Sebastian,
>> >
>> > Thank you for your suggestions.
>> >
>> > Regarding
>> > a) I slightly modified the algorithm to make it work better with the small 5 ms threshold. I updated the paper on arXiv; it should be online by Tuesday morning Central European Time. Detection accuracy for Linux's fq_codel is quite high (high 90s) but it doesn't work that well with small bandwidths (<=10 Mbit/s).
>> > b) that's a good suggestion. I'm thinking how to do it best since also every experiment with every RTT/bandwidth was repeated and I'm not sure how to make a CDF that includes the RTTs/bandwidths and the repetitions.
>> > c) I guess for every experiment with pfifo, the resulting accuracy is a true negative rate, while for every experiment with fq* the resulting accuracy is a true positive rate. I updated the paper to include these terms to make it clearer. Summarizing, the true negative rate is 100%, the true positive rate for fq is >= 95% and for fq_codel it's also in that range except for low bandwidths.
>> >
>> > In case you're interested in reliable FQ detection but not in the combination of FQ detection and congestion control, I co-authored another paper which uses a different FQ detection method, which is more robust but has the disadvantage of causing packet loss (Detecting Fair Queuing for Better Congestion Control (https://arxiv.org/abs/2010.08362)).
>> >
>> > Regards,
>> > Max
>> > _______________________________________________
>> > Bloat mailing list
>> > Bloat@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC


end of thread, other threads:[~2022-10-13 14:17 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-27 16:23 [Bloat] Fair queuing detection for congestion control Maximilian Bachl
2022-07-01  9:37 ` Sebastian Moeller
2022-07-03 14:16   ` Maximilian Bachl
2022-07-03 14:49     ` Dave Taht
2022-10-12 17:35       ` Maximilian Bachl
2022-10-12 18:16         ` Dave Taht
2022-10-13 14:17         ` Dave Taht
