CoDel AQM discussions
 help / color / mirror / Atom feed
* [Codel] Finite-Buffer M/G/1 Queues with Time and Space Priorities
@ 2022-07-27 15:34 Dave Taht
  2022-07-27 17:23 ` Luca Muscariello
       [not found] ` <ae7a9f3e-6f94-1953-20dd-3d2140a4e49d@kit.edu>
  0 siblings, 2 replies; 9+ messages in thread
From: Dave Taht @ 2022-07-27 15:34 UTC (permalink / raw)
  To: starlink, codel

Occasionally I pass along a recent paper that I don't understand in
the hope that someone can enlighten me.
This is one of those occasions, where I am trying to leverage what I
understand of existing FQ-codel behaviors against real traffic.

https://www.hindawi.com/journals/mpe/2022/4539940/

Compared to the previous study on finite-buffer M/M/1 priority queues
with time and space priority, where service times are identical and
exponentially distributed for both types of traffic, in our model we
assume that service times are different and are generally distributed
for different types of traffic. As a result, our model is more
suitable for the performance analysis of communication systems
accommodating multiple types of traffic with different service-time
distributions. For the proposed queueing model, we derive the
queue-length distributions, loss probabilities, and mean waiting times
of both types of traffic, as well as the push-out probability of
delay-sensitive traffic.

--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] Finite-Buffer M/G/1 Queues with Time and Space Priorities
  2022-07-27 15:34 [Codel] Finite-Buffer M/G/1 Queues with Time and Space Priorities Dave Taht
@ 2022-07-27 17:23 ` Luca Muscariello
       [not found] ` <ae7a9f3e-6f94-1953-20dd-3d2140a4e49d@kit.edu>
  1 sibling, 0 replies; 9+ messages in thread
From: Luca Muscariello @ 2022-07-27 17:23 UTC (permalink / raw)
  To: Dave Taht; +Cc: codel, starlink

[-- Attachment #1: Type: text/plain, Size: 1787 bytes --]

On Wed 27 Jul 2022 at 17:34, Dave Taht <dave.taht@gmail.com> wrote:

> Occasionally I pass along a recent paper that I don't understand in
> the hope that someone can enlighten me.
> This is one of those occasions, where I am trying to leverage what I
> understand of existing FQ-codel behaviors against real traffic.



I’m not sure how realistic the model is. The delay-sensitive class gets
non-preemptive service prioritization over the loss-sensitive class (
dequeue time) So far so good.
The loss-sensitive class can take advantage of the presence of
delay-sensitive packets in the queue at enqueue time if the buffer is full
by dropping delay sensitive traffic.

I don’t think it models anything useful for fq-codel.
It might be a model for loss-less networks  like fiber channel or things
like NVMe over  Fabric with Ethernet as fabric.


>
> https://www.hindawi.com/journals/mpe/2022/4539940/
>
> Compared to the previous study on finite-buffer M/M/1 priority queues
> with time and space priority, where service times are identical and
> exponentially distributed for both types of traffic, in our model we
> assume that service times are different and are generally distributed
> for different types of traffic. As a result, our model is more
> suitable for the performance analysis of communication systems
> accommodating multiple types of traffic with different service-time
> distributions. For the proposed queueing model, we derive the
> queue-length distributions, loss probabilities, and mean waiting times
> of both types of traffic, as well as the push-out probability of
> delay-sensitive traffic.
>
> --
> FQ World Domination pending:
> https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC
>

[-- Attachment #2: Type: text/html, Size: 2710 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
       [not found]   ` <CAKf5G6L8_sKGtSQ-0RaiQdTkwd=7shjtKi9HKq0k+D+SXGu0=w@mail.gmail.com>
@ 2022-07-28  9:55     ` Sebastian Moeller
       [not found]       ` <CAKf5G6L86pXFj7s-gB1VfiF3gctDpbd7Biw6mKhLfxXgRTRSzA@mail.gmail.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Sebastian Moeller @ 2022-07-28  9:55 UTC (permalink / raw)
  To: Bjørn Ivar Teigen; +Cc: Bless, Roland (TM), Dave Taht via Starlink, codel

Hi all,


> On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
> 
> Hi everyone,
> 
> Interesting paper Dave, I've got a few thoughts:
> 
> I like the split into delay-sensitive and loss-sensitive data.

However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).

About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.

Regards
	Sebastian



> Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch) 
> 
> That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
> 
> On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
> Hi Dave,
> 
> IMHO the problem w.r.t the applicability of most models from
> queueing theory is that they only work for load < 1, whereas
> we are using the network with load values ~1 (i.e., around one) due to 
> congestion control feedback loops that drive the bottleneck link
> to saturation (unless you consider application limited traffic sources).
> 
> To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
> 
> Regards,
> Bjørn Ivar
>  
> 
> Regards,
>   Roland
> 
> On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
> > Occasionally I pass along a recent paper that I don't understand in
> > the hope that someone can enlighten me.
> > This is one of those occasions, where I am trying to leverage what I
> > understand of existing FQ-codel behaviors against real traffic.
> > 
> > https://www.hindawi.com/journals/mpe/2022/4539940/
> > 
> > Compared to the previous study on finite-buffer M/M/1 priority queues
> > with time and space priority, where service times are identical and
> > exponentially distributed for both types of traffic, in our model we
> > assume that service times are different and are generally distributed
> > for different types of traffic. As a result, our model is more
> > suitable for the performance analysis of communication systems
> > accommodating multiple types of traffic with different service-time
> > distributions. For the proposed queueing model, we derive the
> > queue-length distributions, loss probabilities, and mean waiting times
> > of both types of traffic, as well as the push-out probability of
> > delay-sensitive traffic.
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
> 
> 
> -- 
> Bjørn Ivar Teigen
> Head of Research
> +47 47335952 | bjorn@domos.no | www.domos.no
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
       [not found]       ` <CAKf5G6L86pXFj7s-gB1VfiF3gctDpbd7Biw6mKhLfxXgRTRSzA@mail.gmail.com>
@ 2022-07-28 13:50         ` Dave Taht
  2022-07-29 14:55           ` Dave Taht
  0 siblings, 1 reply; 9+ messages in thread
From: Dave Taht @ 2022-07-28 13:50 UTC (permalink / raw)
  To: Bjørn Ivar Teigen; +Cc: Sebastian Moeller, Dave Taht via Starlink, codel

thx for the comments everyone!

On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
<starlink@lists.bufferbloat.net> wrote:
>
> Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering for instance, or triggering an algorithm to smooth over missing data), is functionally equivalent to loss, and therefore buffering beyond that point is making things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
>
> In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.

the implicit flow analysis of fq_codel paper toke did is here:
http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
It's a really nice feature!, and helps a lot when also applied to wifi
station scheduling.

I have sometimes thought that increasing to quantum to account for two
paced packets in a row (at high rates) was a good idea,
other times having paced transports analyze the "beat frequency" of
sending packets through fq_codel vs a vs the ack flow characteristics
(for example, filtering) might be useful.

Imagine that instead of sending packets on a fixed but increasing
pacing schedule within an RTT thusly

PPPPPPPPPP # IW10 burst
PP      PP      PP     PP    PP # often about 24 packets in what we
think the RTT is

PP  PP  PP  PP PP PP PP

PPPPPPPPPPPPPPPPPP

PPPPPPPPPPPPPPPPPPPPPPP stready buffering and ultimately a drop (and
yes this is inaccurate a model in a zillion ways, forgive me for
purposes of extrapolation in ascii text)

If instead...

You broke up the pacing within an RTT on an actual curve, selecting
some random segment out of PI as your actual starting point, say, at
3.14596 here.

PPPPPP PPPPP PPP
PPPPP PPPPPPPP
PPPPPPPPP PPP PP

3.14159265358979323846264338327950288419716939937510
  58209749445923078164062862089986280348253421170679
  82148086513282306647093844609550582231725359408128
  48111745028410270193852110555964462294895493038196
  44288109756659334461284756482337867831652712019091
  45648566923460348610454326648213393607260249141273
  72458700660631558817488152092096282925409171536436
  78925903600113305305488204665213841469519415116094
  33057270365759591953092186117381932611793105118548
  07446237996274956735188575272489122793818301194912
  98336733624406566430860213949463952247371907021798
  60943702770539217176293176752384674818467669405132
  00056812714526356082778577134275778960917363717872
  14684409012249534301465495853710507922796892589235
  42019956112129021960864034418159813629774771309960
  51870721134999999837297804995105973173281609631859
  50244594553469083026425223082533446850352619311881
  71010003137838752886587533208381420617177669147303
  59825349042875546873115956286388235378759375195778
  18577805321712268066130019278766111959092164201989

what could you learn?


> - Bjørn Ivar
>
> On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> Hi all,
>>
>>
>> > On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Hi everyone,
>> >
>> > Interesting paper Dave, I've got a few thoughts:
>> >
>> > I like the split into delay-sensitive and loss-sensitive data.
>>
>> However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>>
>> About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.
>>
>> Regards
>>         Sebastian
>>
>>
>>
>> > Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
>> >
>> > That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
>> >
>> > On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
>> > Hi Dave,
>> >
>> > IMHO the problem w.r.t the applicability of most models from
>> > queueing theory is that they only work for load < 1, whereas
>> > we are using the network with load values ~1 (i.e., around one) due to
>> > congestion control feedback loops that drive the bottleneck link
>> > to saturation (unless you consider application limited traffic sources).
>> >
>> > To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
>> >
>> > Regards,
>> > Bjørn Ivar
>> >
>> >
>> > Regards,
>> >   Roland
>> >
>> > On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
>> > > Occasionally I pass along a recent paper that I don't understand in
>> > > the hope that someone can enlighten me.
>> > > This is one of those occasions, where I am trying to leverage what I
>> > > understand of existing FQ-codel behaviors against real traffic.
>> > >
>> > > https://www.hindawi.com/journals/mpe/2022/4539940/
>> > >
>> > > Compared to the previous study on finite-buffer M/M/1 priority queues
>> > > with time and space priority, where service times are identical and
>> > > exponentially distributed for both types of traffic, in our model we
>> > > assume that service times are different and are generally distributed
>> > > for different types of traffic. As a result, our model is more
>> > > suitable for the performance analysis of communication systems
>> > > accommodating multiple types of traffic with different service-time
>> > > distributions. For the proposed queueing model, we derive the
>> > > queue-length distributions, loss probabilities, and mean waiting times
>> > > of both types of traffic, as well as the push-out probability of
>> > > delay-sensitive traffic.
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>> >
>> >
>> > --
>> > Bjørn Ivar Teigen
>> > Head of Research
>> > +47 47335952 | bjorn@domos.no | www.domos.no
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>
>
> --
> Bjørn Ivar Teigen
> Head of Research
> +47 47335952 | bjorn@domos.no | www.domos.no
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink



-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
  2022-07-28 13:50         ` Dave Taht
@ 2022-07-29 14:55           ` Dave Taht
  2022-07-29 15:29             ` Sebastian Moeller
  2022-07-29 15:29             ` Sebastian Moeller
  0 siblings, 2 replies; 9+ messages in thread
From: Dave Taht @ 2022-07-29 14:55 UTC (permalink / raw)
  To: Bjørn Ivar Teigen; +Cc: Sebastian Moeller, Dave Taht via Starlink, codel

To give credit where credit is due, "packet chirping" had been
explored before in the context
of the l4s early marking ecn effort:

https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf

It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
For now.

On Thu, Jul 28, 2022 at 6:50 AM Dave Taht <dave.taht@gmail.com> wrote:
>
> thx for the comments everyone!
>
> On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
> <starlink@lists.bufferbloat.net> wrote:
> >
> > Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering for instance, or triggering an algorithm to smooth over missing data), is functionally equivalent to loss, and therefore buffering beyond that point is making things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
> >
> > In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.
>
> the implicit flow analysis of fq_codel paper toke did is here:
> http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
> It's a really nice feature!, and helps a lot when also applied to wifi
> station scheduling.
>
> I have sometimes thought that increasing to quantum to account for two
> paced packets in a row (at high rates) was a good idea,
> other times having paced transports analyze the "beat frequency" of
> sending packets through fq_codel vs a vs the ack flow characteristics
> (for example, filtering) might be useful.
>
> Imagine that instead of sending packets on a fixed but increasing
> pacing schedule within an RTT thusly
>
> PPPPPPPPPP # IW10 burst
> PP      PP      PP     PP    PP # often about 24 packets in what we
> think the RTT is
>
> PP  PP  PP  PP PP PP PP
>
> PPPPPPPPPPPPPPPPPP
>
> PPPPPPPPPPPPPPPPPPPPPPP stready buffering and ultimately a drop (and
> yes this is inaccurate a model in a zillion ways, forgive me for
> purposes of extrapolation in ascii text)
>
> If instead...
>
> You broke up the pacing within an RTT on an actual curve, selecting
> some random segment out of PI as your actual starting point, say, at
> 3.14596 here.
>
> PPPPPP PPPPP PPP
> PPPPP PPPPPPPP
> PPPPPPPPP PPP PP
>
> 3.14159265358979323846264338327950288419716939937510
>   58209749445923078164062862089986280348253421170679
>   82148086513282306647093844609550582231725359408128
>   48111745028410270193852110555964462294895493038196
>   44288109756659334461284756482337867831652712019091
>   45648566923460348610454326648213393607260249141273
>   72458700660631558817488152092096282925409171536436
>   78925903600113305305488204665213841469519415116094
>   33057270365759591953092186117381932611793105118548
>   07446237996274956735188575272489122793818301194912
>   98336733624406566430860213949463952247371907021798
>   60943702770539217176293176752384674818467669405132
>   00056812714526356082778577134275778960917363717872
>   14684409012249534301465495853710507922796892589235
>   42019956112129021960864034418159813629774771309960
>   51870721134999999837297804995105973173281609631859
>   50244594553469083026425223082533446850352619311881
>   71010003137838752886587533208381420617177669147303
>   59825349042875546873115956286388235378759375195778
>   18577805321712268066130019278766111959092164201989
>
> what could you learn?
>
>
> > - Bjørn Ivar
> >
> > On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
> >>
> >> Hi all,
> >>
> >>
> >> > On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
> >> >
> >> > Hi everyone,
> >> >
> >> > Interesting paper Dave, I've got a few thoughts:
> >> >
> >> > I like the split into delay-sensitive and loss-sensitive data.
> >>
> >> However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
> >>
> >> About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.
> >>
> >> Regards
> >>         Sebastian
> >>
> >>
> >>
> >> > Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
> >> >
> >> > That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
> >> >
> >> > On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
> >> > Hi Dave,
> >> >
> >> > IMHO the problem w.r.t the applicability of most models from
> >> > queueing theory is that they only work for load < 1, whereas
> >> > we are using the network with load values ~1 (i.e., around one) due to
> >> > congestion control feedback loops that drive the bottleneck link
> >> > to saturation (unless you consider application limited traffic sources).
> >> >
> >> > To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
> >> >
> >> > Regards,
> >> > Bjørn Ivar
> >> >
> >> >
> >> > Regards,
> >> >   Roland
> >> >
> >> > On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
> >> > > Occasionally I pass along a recent paper that I don't understand in
> >> > > the hope that someone can enlighten me.
> >> > > This is one of those occasions, where I am trying to leverage what I
> >> > > understand of existing FQ-codel behaviors against real traffic.
> >> > >
> >> > > https://www.hindawi.com/journals/mpe/2022/4539940/
> >> > >
> >> > > Compared to the previous study on finite-buffer M/M/1 priority queues
> >> > > with time and space priority, where service times are identical and
> >> > > exponentially distributed for both types of traffic, in our model we
> >> > > assume that service times are different and are generally distributed
> >> > > for different types of traffic. As a result, our model is more
> >> > > suitable for the performance analysis of communication systems
> >> > > accommodating multiple types of traffic with different service-time
> >> > > distributions. For the proposed queueing model, we derive the
> >> > > queue-length distributions, loss probabilities, and mean waiting times
> >> > > of both types of traffic, as well as the push-out probability of
> >> > > delay-sensitive traffic.
> >> > _______________________________________________
> >> > Starlink mailing list
> >> > Starlink@lists.bufferbloat.net
> >> > https://lists.bufferbloat.net/listinfo/starlink
> >> >
> >> >
> >> > --
> >> > Bjørn Ivar Teigen
> >> > Head of Research
> >> > +47 47335952 | bjorn@domos.no | www.domos.no
> >> > _______________________________________________
> >> > Starlink mailing list
> >> > Starlink@lists.bufferbloat.net
> >> > https://lists.bufferbloat.net/listinfo/starlink
> >>
> >
> >
> > --
> > Bjørn Ivar Teigen
> > Head of Research
> > +47 47335952 | bjorn@domos.no | www.domos.no
> > _______________________________________________
> > Starlink mailing list
> > Starlink@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/starlink
>
>
>
> --
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC



-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
  2022-07-29 14:55           ` Dave Taht
@ 2022-07-29 15:29             ` Sebastian Moeller
  2022-07-29 15:29             ` Sebastian Moeller
  1 sibling, 0 replies; 9+ messages in thread
From: Sebastian Moeller @ 2022-07-29 15:29 UTC (permalink / raw)
  To: Dave Taht, Bjørn Ivar Teigen; +Cc: Dave Taht via Starlink, codel

[-- Attachment #1: Type: text/plain, Size: 9922 bytes --]

Hi Dave,

I thought it was accepted knowledge that inter-packet delays across the internet are not reliable? Why are paced chirps immune from that problem? Or asked differently after all noise filtering required for robust and reliable operation over the existing internet, is this still going to be noticeably faster than current slow-start? In spite of the slow in the name doubling every RTT is IMHO already a pretty aggressive growth function....

Sidenote, you really think the second paper nailed PCs coffin shut?

On 29 July 2022 16:55:42 CEST, Dave Taht <dave.taht@gmail.com> wrote:
>To give credit where credit is due, "packet chirping" had been
>explored before in the context
>of the l4s early marking ecn effort:
>
>https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf
>
>It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
>For now.
>
>On Thu, Jul 28, 2022 at 6:50 AM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> thx for the comments everyone!
>>
>> On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering for instance, or triggering an algorithm to smooth over missing data), is functionally equivalent to loss, and therefore buffering beyond that point is making things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
>> >
>> > In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.
>>
>> the implicit flow analysis of fq_codel paper toke did is here:
>> http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
>> It's a really nice feature!, and helps a lot when also applied to wifi
>> station scheduling.
>>
>> I have sometimes thought that increasing to quantum to account for two
>> paced packets in a row (at high rates) was a good idea,
>> other times having paced transports analyze the "beat frequency" of
>> sending packets through fq_codel vs a vs the ack flow characteristics
>> (for example, filtering) might be useful.
>>
>> Imagine that instead of sending packets on a fixed but increasing
>> pacing schedule within an RTT thusly
>>
>> PPPPPPPPPP # IW10 burst
>> PP      PP      PP     PP    PP # often about 24 packets in what we
>> think the RTT is
>>
>> PP  PP  PP  PP PP PP PP
>>
>> PPPPPPPPPPPPPPPPPP
>>
>> PPPPPPPPPPPPPPPPPPPPPPP stready buffering and ultimately a drop (and
>> yes this is inaccurate a model in a zillion ways, forgive me for
>> purposes of extrapolation in ascii text)
>>
>> If instead...
>>
>> You broke up the pacing within an RTT on an actual curve, selecting
>> some random segment out of PI as your actual starting point, say, at
>> 3.14596 here.
>>
>> PPPPPP PPPPP PPP
>> PPPPP PPPPPPPP
>> PPPPPPPPP PPP PP
>>
>> 3.14159265358979323846264338327950288419716939937510
>>   58209749445923078164062862089986280348253421170679
>>   82148086513282306647093844609550582231725359408128
>>   48111745028410270193852110555964462294895493038196
>>   44288109756659334461284756482337867831652712019091
>>   45648566923460348610454326648213393607260249141273
>>   72458700660631558817488152092096282925409171536436
>>   78925903600113305305488204665213841469519415116094
>>   33057270365759591953092186117381932611793105118548
>>   07446237996274956735188575272489122793818301194912
>>   98336733624406566430860213949463952247371907021798
>>   60943702770539217176293176752384674818467669405132
>>   00056812714526356082778577134275778960917363717872
>>   14684409012249534301465495853710507922796892589235
>>   42019956112129021960864034418159813629774771309960
>>   51870721134999999837297804995105973173281609631859
>>   50244594553469083026425223082533446850352619311881
>>   71010003137838752886587533208381420617177669147303
>>   59825349042875546873115956286388235378759375195778
>>   18577805321712268066130019278766111959092164201989
>>
>> what could you learn?
>>
>>
>> > - Bjørn Ivar
>> >
>> > On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
>> >>
>> >> Hi all,
>> >>
>> >>
>> >> > On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >> >
>> >> > Hi everyone,
>> >> >
>> >> > Interesting paper Dave, I've got a few thoughts:
>> >> >
>> >> > I like the split into delay-sensitive and loss-sensitive data.
>> >>
>> >> However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>> >>
>> >> About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.
>> >>
>> >> Regards
>> >>         Sebastian
>> >>
>> >>
>> >>
>> >> > Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
>> >> >
>> >> > That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
>> >> >
>> >> > On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >> > Hi Dave,
>> >> >
>> >> > IMHO the problem w.r.t the applicability of most models from
>> >> > queueing theory is that they only work for load < 1, whereas
>> >> > we are using the network with load values ~1 (i.e., around one) due to
>> >> > congestion control feedback loops that drive the bottleneck link
>> >> > to saturation (unless you consider application limited traffic sources).
>> >> >
>> >> > To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
>> >> >
>> >> > Regards,
>> >> > Bjørn Ivar
>> >> >
>> >> >
>> >> > Regards,
>> >> >   Roland
>> >> >
>> >> > On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
>> >> > > Occasionally I pass along a recent paper that I don't understand in
>> >> > > the hope that someone can enlighten me.
>> >> > > This is one of those occasions, where I am trying to leverage what I
>> >> > > understand of existing FQ-codel behaviors against real traffic.
>> >> > >
>> >> > > https://www.hindawi.com/journals/mpe/2022/4539940/
>> >> > >
>> >> > > Compared to the previous study on finite-buffer M/M/1 priority queues
>> >> > > with time and space priority, where service times are identical and
>> >> > > exponentially distributed for both types of traffic, in our model we
>> >> > > assume that service times are different and are generally distributed
>> >> > > for different types of traffic. As a result, our model is more
>> >> > > suitable for the performance analysis of communication systems
>> >> > > accommodating multiple types of traffic with different service-time
>> >> > > distributions. For the proposed queueing model, we derive the
>> >> > > queue-length distributions, loss probabilities, and mean waiting times
>> >> > > of both types of traffic, as well as the push-out probability of
>> >> > > delay-sensitive traffic.
>> >> > _______________________________________________
>> >> > Starlink mailing list
>> >> > Starlink@lists.bufferbloat.net
>> >> > https://lists.bufferbloat.net/listinfo/starlink
>> >> >
>> >> >
>> >> > --
>> >> > Bjørn Ivar Teigen
>> >> > Head of Research
>> >> > +47 47335952 | bjorn@domos.no | www.domos.no
>> >> > _______________________________________________
>> >> > Starlink mailing list
>> >> > Starlink@lists.bufferbloat.net
>> >> > https://lists.bufferbloat.net/listinfo/starlink
>> >>
>> >
>> >
>> > --
>> > Bjørn Ivar Teigen
>> > Head of Research
>> > +47 47335952 | bjorn@domos.no | www.domos.no
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>
>
>
>-- 
>FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>Dave Täht CEO, TekLibre, LLC

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[-- Attachment #2: Type: text/html, Size: 11626 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
  2022-07-29 14:55           ` Dave Taht
  2022-07-29 15:29             ` Sebastian Moeller
@ 2022-07-29 15:29             ` Sebastian Moeller
  2022-07-29 17:08               ` Dave Taht
  1 sibling, 1 reply; 9+ messages in thread
From: Sebastian Moeller @ 2022-07-29 15:29 UTC (permalink / raw)
  To: Dave Taht, Bjørn Ivar Teigen; +Cc: Dave Taht via Starlink, codel

[-- Attachment #1: Type: text/plain, Size: 9922 bytes --]

Hi Dave,

I thought it was accepted knowledge that inter-packet delays across the internet are not reliable? Why are paced chirps immune from that problem? Or asked differently after all noise filtering required for robust and reliable operation over the existing internet, is this still going to be noticeably faster than current slow-start? In spite of the slow in the name doubling every RTT is IMHO already a pretty aggressive growth function....

Sidenote, you really think the second paper nailed PCs coffin shut?

On 29 July 2022 16:55:42 CEST, Dave Taht <dave.taht@gmail.com> wrote:
>To give credit where credit is due, "packet chirping" had been
>explored before in the context
>of the l4s early marking ecn effort:
>
>https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf
>
>It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
>For now.
>
>On Thu, Jul 28, 2022 at 6:50 AM Dave Taht <dave.taht@gmail.com> wrote:
>>
>> thx for the comments everyone!
>>
>> On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
>> <starlink@lists.bufferbloat.net> wrote:
>> >
>> > Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering for instance, or triggering an algorithm to smooth over missing data), is functionally equivalent to loss, and therefore buffering beyond that point is making things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
>> >
>> > In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.
>>
>> the implicit flow analysis of fq_codel paper toke did is here:
>> http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
>> It's a really nice feature!, and helps a lot when also applied to wifi
>> station scheduling.
>>
>> I have sometimes thought that increasing to quantum to account for two
>> paced packets in a row (at high rates) was a good idea,
>> other times having paced transports analyze the "beat frequency" of
>> sending packets through fq_codel vs a vs the ack flow characteristics
>> (for example, filtering) might be useful.
>>
>> Imagine that instead of sending packets on a fixed but increasing
>> pacing schedule within an RTT thusly
>>
>> PPPPPPPPPP # IW10 burst
>> PP      PP      PP     PP    PP # often about 24 packets in what we
>> think the RTT is
>>
>> PP  PP  PP  PP PP PP PP
>>
>> PPPPPPPPPPPPPPPPPP
>>
>> PPPPPPPPPPPPPPPPPPPPPPP stready buffering and ultimately a drop (and
>> yes this is inaccurate a model in a zillion ways, forgive me for
>> purposes of extrapolation in ascii text)
>>
>> If instead...
>>
>> You broke up the pacing within an RTT on an actual curve, selecting
>> some random segment out of PI as your actual starting point, say, at
>> 3.14596 here.
>>
>> PPPPPP PPPPP PPP
>> PPPPP PPPPPPPP
>> PPPPPPPPP PPP PP
>>
>> 3.14159265358979323846264338327950288419716939937510
>>   58209749445923078164062862089986280348253421170679
>>   82148086513282306647093844609550582231725359408128
>>   48111745028410270193852110555964462294895493038196
>>   44288109756659334461284756482337867831652712019091
>>   45648566923460348610454326648213393607260249141273
>>   72458700660631558817488152092096282925409171536436
>>   78925903600113305305488204665213841469519415116094
>>   33057270365759591953092186117381932611793105118548
>>   07446237996274956735188575272489122793818301194912
>>   98336733624406566430860213949463952247371907021798
>>   60943702770539217176293176752384674818467669405132
>>   00056812714526356082778577134275778960917363717872
>>   14684409012249534301465495853710507922796892589235
>>   42019956112129021960864034418159813629774771309960
>>   51870721134999999837297804995105973173281609631859
>>   50244594553469083026425223082533446850352619311881
>>   71010003137838752886587533208381420617177669147303
>>   59825349042875546873115956286388235378759375195778
>>   18577805321712268066130019278766111959092164201989
>>
>> what could you learn?
>>
>>
>> > - Bjørn Ivar
>> >
>> > On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
>> >>
>> >> Hi all,
>> >>
>> >>
>> >> > On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >> >
>> >> > Hi everyone,
>> >> >
>> >> > Interesting paper Dave, I've got a few thoughts:
>> >> >
>> >> > I like the split into delay-sensitive and loss-sensitive data.
>> >>
>> >> However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>> >>
>> >> About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.
>> >>
>> >> Regards
>> >>         Sebastian
>> >>
>> >>
>> >>
>> >> > Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
>> >> >
>> >> > That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
>> >> >
>> >> > On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
>> >> > Hi Dave,
>> >> >
>> >> > IMHO the problem w.r.t the applicability of most models from
>> >> > queueing theory is that they only work for load < 1, whereas
>> >> > we are using the network with load values ~1 (i.e., around one) due to
>> >> > congestion control feedback loops that drive the bottleneck link
>> >> > to saturation (unless you consider application limited traffic sources).
>> >> >
>> >> > To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
>> >> >
>> >> > Regards,
>> >> > Bjørn Ivar
>> >> >
>> >> >
>> >> > Regards,
>> >> >   Roland
>> >> >
>> >> > On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
>> >> > > Occasionally I pass along a recent paper that I don't understand in
>> >> > > the hope that someone can enlighten me.
>> >> > > This is one of those occasions, where I am trying to leverage what I
>> >> > > understand of existing FQ-codel behaviors against real traffic.
>> >> > >
>> >> > > https://www.hindawi.com/journals/mpe/2022/4539940/
>> >> > >
>> >> > > Compared to the previous study on finite-buffer M/M/1 priority queues
>> >> > > with time and space priority, where service times are identical and
>> >> > > exponentially distributed for both types of traffic, in our model we
>> >> > > assume that service times are different and are generally distributed
>> >> > > for different types of traffic. As a result, our model is more
>> >> > > suitable for the performance analysis of communication systems
>> >> > > accommodating multiple types of traffic with different service-time
>> >> > > distributions. For the proposed queueing model, we derive the
>> >> > > queue-length distributions, loss probabilities, and mean waiting times
>> >> > > of both types of traffic, as well as the push-out probability of
>> >> > > delay-sensitive traffic.
>> >> > _______________________________________________
>> >> > Starlink mailing list
>> >> > Starlink@lists.bufferbloat.net
>> >> > https://lists.bufferbloat.net/listinfo/starlink
>> >> >
>> >> >
>> >> > --
>> >> > Bjørn Ivar Teigen
>> >> > Head of Research
>> >> > +47 47335952 | bjorn@domos.no | www.domos.no
>> >> > _______________________________________________
>> >> > Starlink mailing list
>> >> > Starlink@lists.bufferbloat.net
>> >> > https://lists.bufferbloat.net/listinfo/starlink
>> >>
>> >
>> >
>> > --
>> > Bjørn Ivar Teigen
>> > Head of Research
>> > +47 47335952 | bjorn@domos.no | www.domos.no
>> > _______________________________________________
>> > Starlink mailing list
>> > Starlink@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/starlink
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>
>
>
>-- 
>FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>Dave Täht CEO, TekLibre, LLC

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

[-- Attachment #2: Type: text/html, Size: 11474 bytes --]

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
  2022-07-29 15:29             ` Sebastian Moeller
@ 2022-07-29 17:08               ` Dave Taht
  2022-07-29 17:41                 ` Sebastian Moeller
  0 siblings, 1 reply; 9+ messages in thread
From: Dave Taht @ 2022-07-29 17:08 UTC (permalink / raw)
  To: Sebastian Moeller; +Cc: Bjørn Ivar Teigen, Dave Taht via Starlink, codel

On Fri, Jul 29, 2022 at 8:29 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Dave,
>
> I thought it was accepted knowledge that inter-packet delays across the internet are not reliable? Why are paced chirps immune from that problem? Or asked differently after all noise filtering required for robust and reliable operation over the existing internet, is this still going to be noticeably faster than current slow-start? In spite of the slow in the name doubling every RTT is IMHO already a pretty aggressive growth function....

I would like to be able to slow "slow start" based on interpacket arrival times.

Elsewhere cloudflare is enabling packet pacing at IW1000....
https://blog.cloudflare.com/when-the-window-is-not-fully-open-your-tcp-stack-is-doing-more-than-you-think/
Increasingly it feels like there is going to be one network for the
data center, and another for everyone else.

> Sidenote, you really think the second paper nailed PCs coffin shut?

No. It did outline the problems well.

With a shift in perspective, instead of trying to smooth over the
difference in behaviors in the underlying network to fit into an 90s
isochronous packet switched network model, in the face of overwhelming
differences in real world (overwhemingly wireless) behaviors...

... It seems easier to improve the transports than fix the routers at
this point. Much as bbr models policers today perhaps it's best to try
and model more network types.

>
> On 29 July 2022 16:55:42 CEST, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> To give credit where credit is due, "packet chirping" had been
>> explored before in the context
>> of the l4s early marking ecn effort:
>>
>> https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf
>>
>> It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
>> For now.
>>
>> On Thu, Jul 28, 2022 at 6:50 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>
>>>
>>>  thx for the comments everyone!
>>>
>>>  On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
>>>  <starlink@lists.bufferbloat.net> wrote:
>>>>
>>>>
>>>>  Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering for instance, or triggering an algorithm to smooth over missing data), is functionally equivalent to loss, and therefore buffering beyond that point is making things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
>>>>
>>>>  In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.
>>>
>>>
>>>  the implicit flow analysis of fq_codel paper toke did is here:
>>>  http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
>>>  It's a really nice feature!, and helps a lot when also applied to wifi
>>>  station scheduling.
>>>
>>>  I have sometimes thought that increasing to quantum to account for two
>>>  paced packets in a row (at high rates) was a good idea,
>>>  other times having paced transports analyze the "beat frequency" of
>>>  sending packets through fq_codel vs a vs the ack flow characteristics
>>>  (for example, filtering) might be useful.
>>>
>>>  Imagine that instead of sending packets on a fixed but increasing
>>>  pacing schedule within an RTT thusly
>>>
>>>  PPPPPPPPPP # IW10 burst
>>>  PP      PP      PP     PP    PP # often about 24 packets in what we
>>>  think the RTT is
>>>
>>>  PP  PP  PP  PP PP PP PP
>>>
>>>  PPPPPPPPPPPPPPPPPP
>>>
>>>  PPPPPPPPPPPPPPPPPPPPPPP stready buffering and ultimately a drop (and
>>>  yes this is inaccurate a model in a zillion ways, forgive me for
>>>  purposes of extrapolation in ascii text)
>>>
>>>  If instead...
>>>
>>>  You broke up the pacing within an RTT on an actual curve, selecting
>>>  some random segment out of PI as your actual starting point, say, at
>>>  3.14596 here.
>>>
>>>  PPPPPP PPPPP PPP
>>>  PPPPP PPPPPPPP
>>>  PPPPPPPPP PPP PP
>>>
>>>  3.14159265358979323846264338327950288419716939937510
>>>    58209749445923078164062862089986280348253421170679
>>>    82148086513282306647093844609550582231725359408128
>>>    48111745028410270193852110555964462294895493038196
>>>    44288109756659334461284756482337867831652712019091
>>>    45648566923460348610454326648213393607260249141273
>>>    72458700660631558817488152092096282925409171536436
>>>    78925903600113305305488204665213841469519415116094
>>>    33057270365759591953092186117381932611793105118548
>>>    07446237996274956735188575272489122793818301194912
>>>    98336733624406566430860213949463952247371907021798
>>>    60943702770539217176293176752384674818467669405132
>>>    00056812714526356082778577134275778960917363717872
>>>    14684409012249534301465495853710507922796892589235
>>>    42019956112129021960864034418159813629774771309960
>>>    51870721134999999837297804995105973173281609631859
>>>    50244594553469083026425223082533446850352619311881
>>>    71010003137838752886587533208381420617177669147303
>>>    59825349042875546873115956286388235378759375195778
>>>    18577805321712268066130019278766111959092164201989
>>>
>>>  what could you learn?
>>>
>>>
>>>>  - Bjørn Ivar
>>>>
>>>>  On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>
>>>>>
>>>>>  Hi all,
>>>>>
>>>>>
>>>>>>  On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>
>>>>>>  Hi everyone,
>>>>>>
>>>>>>  Interesting paper Dave, I've got a few thoughts:
>>>>>>
>>>>>>  I like the split into delay-sensitive and loss-sensitive data.
>>>>>
>>>>>
>>>>>  However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>>>>>
>>>>>  About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.
>>>>>
>>>>>  Regards
>>>>>          Sebastian
>>>>>
>>>>>
>>>>>
>>>>>>  Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
>>>>>>
>>>>>>  That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
>>>>>>
>>>>>>  On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>  Hi Dave,
>>>>>>
>>>>>>  IMHO the problem w.r.t the applicability of most models from
>>>>>>  queueing theory is that they only work for load < 1, whereas
>>>>>>  we are using the network with load values ~1 (i.e., around one) due to
>>>>>>  congestion control feedback loops that drive the bottleneck link
>>>>>>  to saturation (unless you consider application limited traffic sources).
>>>>>>
>>>>>>  To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
>>>>>>
>>>>>>  Regards,
>>>>>>  Bjørn Ivar
>>>>>>
>>>>>>
>>>>>>  Regards,
>>>>>>    Roland
>>>>>>
>>>>>>  On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
>>>>>>>
>>>>>>>  Occasionally I pass along a recent paper that I don't understand in
>>>>>>>  the hope that someone can enlighten me.
>>>>>>>  This is one of those occasions, where I am trying to leverage what I
>>>>>>>  understand of existing FQ-codel behaviors against real traffic.
>>>>>>>
>>>>>>>  https://www.hindawi.com/journals/mpe/2022/4539940/
>>>>>>>
>>>>>>>  Compared to the previous study on finite-buffer M/M/1 priority queues
>>>>>>>  with time and space priority, where service times are identical and
>>>>>>>  exponentially distributed for both types of traffic, in our model we
>>>>>>>  assume that service times are different and are generally distributed
>>>>>>>  for different types of traffic. As a result, our model is more
>>>>>>>  suitable for the performance analysis of communication systems
>>>>>>>  accommodating multiple types of traffic with different service-time
>>>>>>>  distributions. For the proposed queueing model, we derive the
>>>>>>>  queue-length distributions, loss probabilities, and mean waiting times
>>>>>>>  of both types of traffic, as well as the push-out probability of
>>>>>>>  delay-sensitive traffic.
>>>>>>
>>>>>> ________________________________
>>>>>>  Starlink mailing list
>>>>>>  Starlink@lists.bufferbloat.net
>>>>>>  https://lists.bufferbloat.net/listinfo/starlink
>>>>>>
>>>>>>
>>>>>>  --
>>>>>>  Bjørn Ivar Teigen
>>>>>>  Head of Research
>>>>>>  +47 47335952 | bjorn@domos.no | www.domos.no
>>>>>> ________________________________
>>>>>>  Starlink mailing list
>>>>>>  Starlink@lists.bufferbloat.net
>>>>>>  https://lists.bufferbloat.net/listinfo/starlink
>>>>>
>>>>>
>>>>
>>>>
>>>>  --
>>>>  Bjørn Ivar Teigen
>>>>  Head of Research
>>>>  +47 47335952 | bjorn@domos.no | www.domos.no
>>>> ________________________________
>>>>  Starlink mailing list
>>>>  Starlink@lists.bufferbloat.net
>>>>  https://lists.bufferbloat.net/listinfo/starlink
>>>
>>>
>>>
>>>
>>>  --
>>>  FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>  Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.



-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [Codel] [Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities
  2022-07-29 17:08               ` Dave Taht
@ 2022-07-29 17:41                 ` Sebastian Moeller
  0 siblings, 0 replies; 9+ messages in thread
From: Sebastian Moeller @ 2022-07-29 17:41 UTC (permalink / raw)
  To: Dave Täht; +Cc: Bjørn Ivar Teigen, Dave Taht via Starlink, codel

Hi Dave,


> On Jul 29, 2022, at 19:08, Dave Taht <dave.taht@gmail.com> wrote:
> 
> On Fri, Jul 29, 2022 at 8:29 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>> 
>> Hi Dave,
>> 
>> I thought it was accepted knowledge that inter-packet delays across the internet are not reliable? Why are paced chirps immune from that problem? Or asked differently after all noise filtering required for robust and reliable operation over the existing internet, is this still going to be noticeably faster than current slow-start? In spite of the slow in the name doubling every RTT is IMHO already a pretty aggressive growth function....
> 
> I would like to be able to slow "slow start" based on interpacket arrival times.

	But that still tries to reason based on shitty^W noisy data... packet inter-arrival times are not good clean actionable data points, Cleaning them up is possible, sure, but will cost time, hence my question how much faster would a robust and reliable paced-chirping slow start be over "exponential growths"?
But I like your idea of "sith pacing", always in tight pairs to tickle out ACKs faster.
Side-note one way of sanity checking capacity estimates from packet pairs is to also send packets of different sizes but that still leaves the "it takes time to extract a decent estimate of the true capacity out of the junk data" problem pure packet inter-arrival times analysis has maybe reducing the time a bit. In the context of TCP it would probably also require deep surgery to release smaller than MSS segments, let alone what happens if carefully crafted packets hit GRO/GSO and friends.



> 
> Elsewhere cloudflare is enabling packet pacing at IW1000....
> https://blog.cloudflare.com/when-the-window-is-not-fully-open-your-tcp-stack-is-doing-more-than-you-think/

	Holy beverage, that is a lot, at that point they might consider increasing the gain factor instead of the offset from which to grow, so instead doubling evert RTT starting from 1000, start from 20 but grow by a factor of 4... I bet they are too chicken for that ;) (looking at coarse plots it looks like it takes only a few rounds ~ 7 for IW10_quadruple to leave IW1000_double in the dust)...

> Increasingly it feels like there is going to be one network for the
> data center, and another for everyone else.

	That happens if you hand data center guys the key to the internet, I guess...


>> Sidenote, you really think the second paper nailed PCs coffin shut?
> 
> No. It did outline the problems well.

	Knowing the sources, I have my reservations, they seems typically strong on the potential upsides and a bit dismissive/cavalier on the negative side-effects part. Also very little reception for paced chirping so far, but that can just mean that things are a bit slower in reaching production code and I got spoiled how fast codel/fq_codel made it into the kernel (both probably considerably less challenging that modifying the TCP state machine).

> With a shift in perspective, instead of trying to smooth over the
> difference in behaviors in the underlying network to fit into an 90s
> isochronous packet switched network model, in the face of overwhelming
> differences in real world (overwhemingly wireless) behaviors...

	Not convinced that is the case. I really believe the slow start is too slow argument is showing unsustainable expectations, as I said exponential window growths is pretty aggressive... the goal that every flow independent of size should achieve speeds close to true capacity simply seems to require an oracle scheduler on all senders injecting into the same path. Color me sceptical...

> ... It seems easier to improve the transports than fix the routers at
> this point. Much as bbr models policers today perhaps it's best to try
> and model more network types.

	+1; that seems preferable to all the approaches of putting more smarts into the network itself...

Thanks for the discussion ;)


Regards
	Sebastian


> 
>> 
>> On 29 July 2022 16:55:42 CEST, Dave Taht <dave.taht@gmail.com> wrote:
>>> 
>>> To give credit where credit is due, "packet chirping" had been
>>> explored before in the context
>>> of the l4s early marking ecn effort:
>>> 
>>> https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf
>>> 
>>> It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
>>> For now.
>>> 
>>> On Thu, Jul 28, 2022 at 6:50 AM Dave Taht <dave.taht@gmail.com> wrote:
>>>> 
>>>> 
>>>> thx for the comments everyone!
>>>> 
>>>> On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
>>>> <starlink@lists.bufferbloat.net> wrote:
>>>>> 
>>>>> 
>>>>> Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering for instance, or triggering an algorithm to smooth over missing data), is functionally equivalent to loss, and therefore buffering beyond that point is making things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
>>>>> 
>>>>> In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.
>>>> 
>>>> 
>>>> the implicit flow analysis of fq_codel paper toke did is here:
>>>> http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
>>>> It's a really nice feature!, and helps a lot when also applied to wifi
>>>> station scheduling.
>>>> 
>>>> I have sometimes thought that increasing to quantum to account for two
>>>> paced packets in a row (at high rates) was a good idea,
>>>> other times having paced transports analyze the "beat frequency" of
>>>> sending packets through fq_codel vs a vs the ack flow characteristics
>>>> (for example, filtering) might be useful.
>>>> 
>>>> Imagine that instead of sending packets on a fixed but increasing
>>>> pacing schedule within an RTT thusly
>>>> 
>>>> PPPPPPPPPP # IW10 burst
>>>> PP      PP      PP     PP    PP # often about 24 packets in what we
>>>> think the RTT is
>>>> 
>>>> PP  PP  PP  PP PP PP PP
>>>> 
>>>> PPPPPPPPPPPPPPPPPP
>>>> 
>>>> PPPPPPPPPPPPPPPPPPPPPPP stready buffering and ultimately a drop (and
>>>> yes this is inaccurate a model in a zillion ways, forgive me for
>>>> purposes of extrapolation in ascii text)
>>>> 
>>>> If instead...
>>>> 
>>>> You broke up the pacing within an RTT on an actual curve, selecting
>>>> some random segment out of PI as your actual starting point, say, at
>>>> 3.14596 here.
>>>> 
>>>> PPPPPP PPPPP PPP
>>>> PPPPP PPPPPPPP
>>>> PPPPPPPPP PPP PP
>>>> 
>>>> 3.14159265358979323846264338327950288419716939937510
>>>>   58209749445923078164062862089986280348253421170679
>>>>   82148086513282306647093844609550582231725359408128
>>>>   48111745028410270193852110555964462294895493038196
>>>>   44288109756659334461284756482337867831652712019091
>>>>   45648566923460348610454326648213393607260249141273
>>>>   72458700660631558817488152092096282925409171536436
>>>>   78925903600113305305488204665213841469519415116094
>>>>   33057270365759591953092186117381932611793105118548
>>>>   07446237996274956735188575272489122793818301194912
>>>>   98336733624406566430860213949463952247371907021798
>>>>   60943702770539217176293176752384674818467669405132
>>>>   00056812714526356082778577134275778960917363717872
>>>>   14684409012249534301465495853710507922796892589235
>>>>   42019956112129021960864034418159813629774771309960
>>>>   51870721134999999837297804995105973173281609631859
>>>>   50244594553469083026425223082533446850352619311881
>>>>   71010003137838752886587533208381420617177669147303
>>>>   59825349042875546873115956286388235378759375195778
>>>>   18577805321712268066130019278766111959092164201989
>>>> 
>>>> what could you learn?
>>>> 
>>>> 
>>>>> - Bjørn Ivar
>>>>> 
>>>>> On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>>> 
>>>>>> 
>>>>>> Hi all,
>>>>>> 
>>>>>> 
>>>>>>> On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>> 
>>>>>>> Hi everyone,
>>>>>>> 
>>>>>>> Interesting paper Dave, I've got a few thoughts:
>>>>>>> 
>>>>>>> I like the split into delay-sensitive and loss-sensitive data.
>>>>>> 
>>>>>> 
>>>>>> However often real data is slightly different (e.g. not nicely either delay- or loss-sensitive)... e.g. for "real-time" games you have both delay and loss sensitivity (similarly for VoIP), however both can deal with occasional lost or delayed packets (if the delay is large enough to say be re-ordered with the temporally next data packet (voice sample in VoIP, server-tick update in games), that packet's data will likely not be evaluated at all). And large scale bulk downloads are both tolerant to delay and occasional loss. So if we think about a state space spanned by a delay and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>>>>>> 
>>>>>> About the rest of the paper I have nothing to contribute, since I did not spend the time to work though it.
>>>>>> 
>>>>>> Regards
>>>>>>         Sebastian
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
>>>>>>> 
>>>>>>> That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
>>>>>>> 
>>>>>>> On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink@lists.bufferbloat.net> wrote:
>>>>>>> Hi Dave,
>>>>>>> 
>>>>>>> IMHO the problem w.r.t the applicability of most models from
>>>>>>> queueing theory is that they only work for load < 1, whereas
>>>>>>> we are using the network with load values ~1 (i.e., around one) due to
>>>>>>> congestion control feedback loops that drive the bottleneck link
>>>>>>> to saturation (unless you consider application limited traffic sources).
>>>>>>> 
>>>>>>> To be fair there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse than average performance worse and more frequent than IID models might suggest.
>>>>>>> 
>>>>>>> Regards,
>>>>>>> Bjørn Ivar
>>>>>>> 
>>>>>>> 
>>>>>>> Regards,
>>>>>>>   Roland
>>>>>>> 
>>>>>>> On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
>>>>>>>> 
>>>>>>>> Occasionally I pass along a recent paper that I don't understand in
>>>>>>>> the hope that someone can enlighten me.
>>>>>>>> This is one of those occasions, where I am trying to leverage what I
>>>>>>>> understand of existing FQ-codel behaviors against real traffic.
>>>>>>>> 
>>>>>>>> https://www.hindawi.com/journals/mpe/2022/4539940/
>>>>>>>> 
>>>>>>>> Compared to the previous study on finite-buffer M/M/1 priority queues
>>>>>>>> with time and space priority, where service times are identical and
>>>>>>>> exponentially distributed for both types of traffic, in our model we
>>>>>>>> assume that service times are different and are generally distributed
>>>>>>>> for different types of traffic. As a result, our model is more
>>>>>>>> suitable for the performance analysis of communication systems
>>>>>>>> accommodating multiple types of traffic with different service-time
>>>>>>>> distributions. For the proposed queueing model, we derive the
>>>>>>>> queue-length distributions, loss probabilities, and mean waiting times
>>>>>>>> of both types of traffic, as well as the push-out probability of
>>>>>>>> delay-sensitive traffic.
>>>>>>> 
>>>>>>> ________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Bjørn Ivar Teigen
>>>>>>> Head of Research
>>>>>>> +47 47335952 | bjorn@domos.no | www.domos.no
>>>>>>> ________________________________
>>>>>>> Starlink mailing list
>>>>>>> Starlink@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Bjørn Ivar Teigen
>>>>> Head of Research
>>>>> +47 47335952 | bjorn@domos.no | www.domos.no
>>>>> ________________________________
>>>>> Starlink mailing list
>>>>> Starlink@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/starlink
>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>> Dave Täht CEO, TekLibre, LLC
>>> 
>>> 
>>> 
>>> 
>>> --
>>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>> Dave Täht CEO, TekLibre, LLC
>> 
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> 
> 
> 
> -- 
> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
> Dave Täht CEO, TekLibre, LLC


^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2022-07-29 17:41 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-27 15:34 [Codel] Finite-Buffer M/G/1 Queues with Time and Space Priorities Dave Taht
2022-07-27 17:23 ` Luca Muscariello
     [not found] ` <ae7a9f3e-6f94-1953-20dd-3d2140a4e49d@kit.edu>
     [not found]   ` <CAKf5G6L8_sKGtSQ-0RaiQdTkwd=7shjtKi9HKq0k+D+SXGu0=w@mail.gmail.com>
2022-07-28  9:55     ` [Codel] [Starlink] " Sebastian Moeller
     [not found]       ` <CAKf5G6L86pXFj7s-gB1VfiF3gctDpbd7Biw6mKhLfxXgRTRSzA@mail.gmail.com>
2022-07-28 13:50         ` Dave Taht
2022-07-29 14:55           ` Dave Taht
2022-07-29 15:29             ` Sebastian Moeller
2022-07-29 15:29             ` Sebastian Moeller
2022-07-29 17:08               ` Dave Taht
2022-07-29 17:41                 ` Sebastian Moeller

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox