[Starlink] Finite-Buffer M/G/1 Queues with Time and Space Priorities

Dave Taht dave.taht at gmail.com
Fri Jul 29 13:08:24 EDT 2022


On Fri, Jul 29, 2022 at 8:29 AM Sebastian Moeller <moeller0 at gmx.de> wrote:
>
> Hi Dave,
>
> I thought it was accepted knowledge that inter-packet delays across the internet are not reliable? Why are paced chirps immune from that problem? Or, asked differently: after all the noise filtering required for robust and reliable operation over the existing internet, is this still going to be noticeably faster than current slow start? In spite of the "slow" in the name, doubling every RTT is IMHO already a pretty aggressive growth function....

I would like to be able to slow down "slow start" based on inter-packet arrival times.
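Roughly the sort of thing I mean, as a toy sketch (Python; the names, the
median-gap filter and the thresholds are all invented for illustration,
this is not any real TCP stack): estimate a delivery rate from the spacing
of ACK arrivals and use it to cap cwnd growth during slow start.

def estimate_rate(ack_times, bytes_per_ack):
    """Bytes/sec estimated from gaps between consecutive ACK arrivals."""
    gaps = sorted(t2 - t1 for t1, t2 in zip(ack_times, ack_times[1:]) if t2 > t1)
    if not gaps:
        return None
    median_gap = gaps[len(gaps) // 2]   # crude filter against ACK compression
    return bytes_per_ack / median_gap

def next_cwnd(cwnd, mss, srtt, ack_times, bytes_per_ack):
    """Classic slow start doubles per RTT; here growth is capped at the
    pipe size implied by the measured ACK spacing."""
    doubled = 2 * cwnd
    rate = estimate_rate(ack_times, bytes_per_ack)
    if rate is None:
        return doubled
    bdp = int(rate * srtt)              # estimated bandwidth-delay product, bytes
    return max(min(doubled, bdp), 2 * mss)

# Example: ten ACKs ~1 ms apart, each covering two 1448-byte segments.
acks = [i * 0.001 for i in range(10)]
print(next_cwnd(cwnd=14480, mss=1448, srtt=0.05,
                ack_times=acks, bytes_per_ack=2 * 1448))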

Elsewhere, Cloudflare is enabling packet pacing at IW1000:
https://blog.cloudflare.com/when-the-window-is-not-fully-open-your-tcp-stack-is-doing-more-than-you-think/
Increasingly it feels like there is going to be one network for the
data center, and another for everyone else.

> Sidenote, you really think the second paper nailed PCs coffin shut?

No. It did outline the problems well.

With a shift in perspective: instead of trying to smooth over the
differences in behavior in the underlying network to fit into a '90s
isochronous packet-switched network model, in the face of overwhelming
differences in real-world (overwhelmingly wireless) behaviors...

... it seems easier to improve the transports than to fix the routers at
this point. Much as BBR models policers today, perhaps it's best to try
to model more network types.

>
> On 29 July 2022 16:55:42 CEST, Dave Taht <dave.taht at gmail.com> wrote:
>>
>> To give credit where credit is due, "packet chirping" had been
>> explored before in the context
>> of the l4s early marking ecn effort:
>>
>> https://www.bobbriscoe.net/presents/1802paced-chirps/1802paced-chirps.pdf
>>
>> It died here: https://bobbriscoe.net/projects/netsvc_i-f/chirp_pfldnet10.pdf
>> For now.
>>
>> On Thu, Jul 28, 2022 at 6:50 AM Dave Taht <dave.taht at gmail.com> wrote:
>>>
>>>
>>>  thx for the comments everyone!
>>>
>>>  On Thu, Jul 28, 2022 at 3:16 AM Bjørn Ivar Teigen via Starlink
>>>  <starlink at lists.bufferbloat.net> wrote:
>>>>
>>>>
>>>>  Very good point. Perhaps we can think of it as "at what point does delay equal loss?". As you say, too much latency (causing reordering, for instance, or triggering an algorithm to smooth over missing data) is functionally equivalent to loss, and therefore buffering beyond that point makes things worse by delaying more traffic. The point at which this kicks in varies a lot between applications though, so some kind of classification might still make sense.
>>>>
>>>>  In a way, I think FQ_Codel does this classification implicitly by treating sparse and non-sparse flows differently.
>>>
>>>
>>>  Toke's paper on the implicit flow analysis that fq_codel does is here:
>>>  http://www.diva-portal.org/smash/get/diva2:1251687/FULLTEXT01.pdf
>>>  It's a really nice feature, and helps a lot when also applied to WiFi
>>>  station scheduling.
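(Roughly, the trick is that a flow whose queue drains completely and then
sends again re-enters at the head of the line with fresh credit, so sparse
flows see almost no queueing. A stripped-down sketch of that bookkeeping,
illustrative only; the real logic lives in net/sched/sch_fq_codel.c and has
more corner cases:)

from collections import deque

class Flow:
    def __init__(self):
        self.pkts = deque()
        self.deficit = 0

class FQSketch:
    def __init__(self, quantum=1514):
        self.quantum = quantum
        self.flows = {}
        self.new_flows = deque()   # sparse ("new") flows are served first
        self.old_flows = deque()

    def enqueue(self, flow_id, pkt_len):
        f = self.flows.setdefault(flow_id, Flow())
        if not f.pkts and f not in self.new_flows and f not in self.old_flows:
            # An idle flow that sends again looks "sparse": it starts at
            # the head of the line with a fresh quantum of credit.
            f.deficit = self.quantum
            self.new_flows.append(f)
        f.pkts.append(pkt_len)

    def dequeue(self):
        while self.new_flows or self.old_flows:
            lst = self.new_flows if self.new_flows else self.old_flows
            f = lst[0]
            if not f.pkts:           # flow drained: drop it from the schedule
                lst.popleft()
                continue
            if f.deficit <= 0:       # used its credit: demote to old_flows
                f.deficit += self.quantum
                lst.popleft()
                self.old_flows.append(f)
                continue
            pkt = f.pkts.popleft()
            f.deficit -= pkt
            return pkt
        return None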
>>>
>>>  I have sometimes thought that increasing the quantum to account for two
>>>  paced packets in a row (at high rates) was a good idea;
>>>  at other times, that having paced transports analyze the "beat frequency" of
>>>  sending packets through fq_codel vis-a-vis the ACK flow characteristics
>>>  (for example, ACK filtering) might be useful.
>>>
>>>  Imagine that instead of sending packets on a fixed but increasing
>>>  pacing schedule within an RTT, like this:
>>>
>>>  PPPPPPPPPP # IW10 burst
>>>  PP      PP      PP     PP    PP # often about 24 packets in what we
>>>  think the RTT is
>>>
>>>  PP  PP  PP  PP PP PP PP
>>>
>>>  PPPPPPPPPPPPPPPPPP
>>>
>>>  PPPPPPPPPPPPPPPPPPPPPPP # steady buffering and ultimately a drop (and
>>>  yes, this is an inaccurate model in a zillion ways; forgive me, it's for
>>>  purposes of extrapolation in ASCII text)
>>>
>>>  If instead...
>>>
>>>  ... you broke up the pacing within an RTT on an actual curve, selecting
>>>  some random segment of the digits of PI as your actual starting point, say, at
>>>  3.14596 here:
>>>
>>>  PPPPPP PPPPP PPP
>>>  PPPPP PPPPPPPP
>>>  PPPPPPPPP PPP PP
>>>
>>>  3.14159265358979323846264338327950288419716939937510
>>>    58209749445923078164062862089986280348253421170679
>>>    82148086513282306647093844609550582231725359408128
>>>    48111745028410270193852110555964462294895493038196
>>>    44288109756659334461284756482337867831652712019091
>>>    45648566923460348610454326648213393607260249141273
>>>    72458700660631558817488152092096282925409171536436
>>>    78925903600113305305488204665213841469519415116094
>>>    33057270365759591953092186117381932611793105118548
>>>    07446237996274956735188575272489122793818301194912
>>>    98336733624406566430860213949463952247371907021798
>>>    60943702770539217176293176752384674818467669405132
>>>    00056812714526356082778577134275778960917363717872
>>>    14684409012249534301465495853710507922796892589235
>>>    42019956112129021960864034418159813629774771309960
>>>    51870721134999999837297804995105973173281609631859
>>>    50244594553469083026425223082533446850352619311881
>>>    71010003137838752886587533208381420617177669147303
>>>    59825349042875546873115956286388235378759375195778
>>>    18577805321712268066130019278766111959092164201989
>>>
>>>  what could you learn?
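One concrete toy reading of the above: treat the digits of pi, starting at
a randomly chosen offset, as a free, reproducible jitter source for the
intra-RTT pacing gaps, so the probe spacing is irregular yet reconstructible
by anyone who knows the offset. The scaling constants below are arbitrary.

import random

PI_DIGITS = ("14159265358979323846264338327950288419716939937510"
             "58209749445923078164062862089986280348253421170679")

def paced_gaps(n_pkts, base_gap_s, offset=None):
    """base_gap_s is the nominal even spacing; each gap is stretched or
    shrunk by up to +/-45% according to one digit of pi (0..9)."""
    if offset is None:
        offset = random.randrange(len(PI_DIGITS) - n_pkts)
    return [base_gap_s * (0.55 + 0.1 * int(d))
            for d in PI_DIGITS[offset:offset + n_pkts]]

# 24 packets spread across a 50 ms RTT instead of an even ~2 ms spacing.
print([round(g * 1000, 2) for g in paced_gaps(24, 0.050 / 24, offset=7)])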
>>>
>>>
>>>>  - Bjørn Ivar
>>>>
>>>>  On Thu, 28 Jul 2022 at 11:55, Sebastian Moeller <moeller0 at gmx.de> wrote:
>>>>>
>>>>>
>>>>>  Hi all,
>>>>>
>>>>>
>>>>>>  On Jul 28, 2022, at 11:26, Bjørn Ivar Teigen via Starlink <starlink at lists.bufferbloat.net> wrote:
>>>>>>
>>>>>>  Hi everyone,
>>>>>>
>>>>>>  Interesting paper Dave, I've got a few thoughts:
>>>>>>
>>>>>>  I like the split into delay-sensitive and loss-sensitive data.
>>>>>
>>>>>
>>>>>  However, real traffic is often a bit different (i.e., not neatly either delay- or loss-sensitive). E.g., "real-time" games have both delay and loss sensitivity (similarly for VoIP), yet both can deal with occasional lost or delayed packets: if the delay is large enough that a packet is effectively re-ordered behind the temporally next data packet (the next voice sample in VoIP, the next server-tick update in games), that packet's data will likely not be evaluated at all. And large-scale bulk downloads are tolerant of both delay and occasional loss. So if we think about a state space spanned by a delay-sensitivity axis and a loss-sensitivity axis, I predict most real traffic types will cluster somewhat around the diagonal (more or less closely).
>>>>>
>>>>>  About the rest of the paper I have nothing to contribute, since I did not spend the time to work through it.
>>>>>
>>>>>  Regards
>>>>>          Sebastian
>>>>>
>>>>>
>>>>>
>>>>>>  Different applications can have different needs and this split allows a queuing algorithm to take those differences into account. Not the first time I've seen this kind of split, but the other one I've seen used M/M/1/k queues (document here: https://www.researchgate.net/publication/2452029_A_Queueing_Theory_Model_that_Enables_Control_of_Loss_and_Delay_of_Traffic_at_a_Network_Switch)
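(For reference, the attraction of that class of model is that the loss
probability has a closed form; this is just the textbook M/M/1/K blocking
formula, nothing specific to that document:)

def mm1k_loss(rho, K):
    """Blocking probability of an M/M/1/K queue.
    rho = lambda/mu is the offered load; K is the system capacity
    (packet in service plus buffer)."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

for rho in (0.8, 0.95, 1.0, 1.2):
    print(rho, round(mm1k_loss(rho, K=50), 6))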
>>>>>>
>>>>>>  That said, the performance metrics are derived from the embedded Markov chain of the queuing system. This means the metrics are averages over *all of time*, and thus there can be shorter periods (seconds, minutes, hours) of much worse than average performance. Therefore the conclusions of the paper should be taken with a grain of salt in my opinion.
>>>>>>
>>>>>>  On Thu, 28 Jul 2022 at 10:45, Bless, Roland (TM) via Starlink <starlink at lists.bufferbloat.net> wrote:
>>>>>>  Hi Dave,
>>>>>>
>>>>>>  IMHO the problem w.r.t. the applicability of most models from
>>>>>>  queueing theory is that they only work for load < 1, whereas
>>>>>>  we are using the network with load values ~1 (i.e., around one) due to
>>>>>>  congestion control feedback loops that drive the bottleneck link
>>>>>>  to saturation (unless you consider application limited traffic sources).
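(Roland's point in one line: for the simplest textbook case, an
infinite-buffer M/M/1 queue, the mean sojourn time is T = 1/(mu - lambda),
which diverges as the load rho = lambda/mu approaches 1, which is exactly
the regime a saturating congestion controller pushes the bottleneck into:)

def mm1_sojourn(lam, mu):
    """Mean time in system for an infinite-buffer M/M/1 queue."""
    if lam >= mu:
        return float("inf")    # no steady state at load >= 1 without losses
    return 1.0 / (mu - lam)

mu = 1000.0                    # service rate, packets/s
for rho in (0.5, 0.9, 0.99, 0.999):
    print(f"rho={rho:6.3f}  T={mm1_sojourn(rho * mu, mu) * 1000:8.3f} ms")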
>>>>>>
>>>>>>  To be fair, there are queuing theory models that include packet loss (which is the case for the paper Dave is asking about here), and these can work perfectly well for load > 1. Agree about the CC feedback loops affecting the results though. Even if the distributions are general in the paper, they still assume samples are IID, which is not true for real networks. Feedback loops make real traffic self-correlated, which makes the short periods of worse-than-average performance both worse and more frequent than IID models might suggest.
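(A quick way to see the IID-vs-correlated point: push the same average load
into the same finite buffer, once with roughly Poisson arrivals and once
with a bursty on/off source; the bursty one loses far more. All parameters
below are invented and the simulation is deliberately crude:)

import random

def loss_rate(arrivals, buf_size=20):
    """Slot-based finite queue, one departure per slot; returns loss fraction."""
    q, lost, offered = 0, 0, 0
    for a in arrivals:
        for _ in range(a):
            offered += 1
            if q < buf_size:
                q += 1
            else:
                lost += 1
        q = max(0, q - 1)                 # serve one packet per slot
    return lost / max(offered, 1)

def poisson_like(mean, n_slots):
    """IID arrivals: binomial(100, mean/100) per slot, roughly Poisson(mean)."""
    for _ in range(n_slots):
        yield sum(random.random() < mean / 100 for _ in range(100))

def onoff(mean, n_slots, burst=4, stickiness=0.95):
    """Correlated arrivals: an on/off source sending `burst` per slot while
    on, with duty cycle chosen so the long-run mean matches `mean`."""
    p_on, on = mean / burst, False
    for _ in range(n_slots):
        if random.random() > stickiness:  # occasionally re-draw the state
            on = random.random() < p_on
        yield burst if on else 0

random.seed(1)
n, load = 100_000, 0.9
print("IID loss   :", round(loss_rate(poisson_like(load, n)), 4))
print("bursty loss:", round(loss_rate(onoff(load, n)), 4))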
>>>>>>
>>>>>>  Regards,
>>>>>>  Bjørn Ivar
>>>>>>
>>>>>>
>>>>>>  Regards,
>>>>>>    Roland
>>>>>>
>>>>>>  On 27.07.22 at 17:34 Dave Taht via Starlink wrote:
>>>>>>>
>>>>>>>  Occasionally I pass along a recent paper that I don't understand in
>>>>>>>  the hope that someone can enlighten me.
>>>>>>>  This is one of those occasions, where I am trying to leverage what I
>>>>>>>  understand of existing FQ-codel behaviors against real traffic.
>>>>>>>
>>>>>>>  https://www.hindawi.com/journals/mpe/2022/4539940/
>>>>>>>
>>>>>>>  Compared to the previous study on finite-buffer M/M/1 priority queues
>>>>>>>  with time and space priority, where service times are identical and
>>>>>>>  exponentially distributed for both types of traffic, in our model we
>>>>>>>  assume that service times are different and are generally distributed
>>>>>>>  for different types of traffic. As a result, our model is more
>>>>>>>  suitable for the performance analysis of communication systems
>>>>>>>  accommodating multiple types of traffic with different service-time
>>>>>>>  distributions. For the proposed queueing model, we derive the
>>>>>>>  queue-length distributions, loss probabilities, and mean waiting times
>>>>>>>  of both types of traffic, as well as the push-out probability of
>>>>>>>  delay-sensitive traffic.
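(To make the model concrete, a rough slot-based sketch of the mechanics the
abstract describes; this is my toy, not the authors' analysis, and every
parameter is invented. Delay-sensitive (DS) packets get time priority in
service; loss-sensitive (LS) packets get space priority in that an arriving
LS packet can push out a queued DS packet when the buffer is full:)

import random
from collections import deque

def run(n_slots=200_000, buf=10, p_ds=0.30, p_ls=0.55, seed=1):
    random.seed(seed)
    ds_q, ls_q = deque(), deque()
    stats = {"ds_lost": 0, "ds_pushed": 0, "ls_lost": 0, "ds_in": 0, "ls_in": 0}
    for _ in range(n_slots):
        # Arrivals (at most one of each class per slot in this toy model).
        if random.random() < p_ds:
            stats["ds_in"] += 1
            if len(ds_q) + len(ls_q) < buf:
                ds_q.append(1)
            else:
                stats["ds_lost"] += 1       # blocked on arrival
        if random.random() < p_ls:
            stats["ls_in"] += 1
            if len(ds_q) + len(ls_q) < buf:
                ls_q.append(1)
            elif ds_q:                      # space priority: push out a DS packet
                ds_q.pop()
                stats["ds_pushed"] += 1
                ls_q.append(1)
            else:
                stats["ls_lost"] += 1
        # One service per slot; time priority: DS is served first.
        if ds_q:
            ds_q.popleft()
        elif ls_q:
            ls_q.popleft()
    return stats

s = run()
print("DS loss (blocked + pushed out):",
      round((s["ds_lost"] + s["ds_pushed"]) / max(s["ds_in"], 1), 4))
print("LS loss:", round(s["ls_lost"] / max(s["ls_in"], 1), 4))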
>>>>>>
>>>>>>
>>>>>>
>>>>>>  --
>>>>>>  Bjørn Ivar Teigen
>>>>>>  Head of Research
>>>>>>  +47 47335952 | bjorn at domos.no | www.domos.no
>>>>>
>>>>>
>>>>
>>>>
>>>>  --
>>>>  Bjørn Ivar Teigen
>>>>  Head of Research
>>>>  +47 47335952 | bjorn at domos.no | www.domos.no
>>>
>>>
>>>
>>>
>>>  --
>>>  FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>>>  Dave Täht CEO, TekLibre, LLC
>>
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.



-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC

