[Bloat] [Rpm] so great to see ISPs that care
Sebastian Moeller
moeller0 at gmx.de
Sun Mar 12 17:37:10 EDT 2023
Hi Bob,
> On Mar 12, 2023, at 22:02, rjmcmahon <rjmcmahon at rjmcmahon.com> wrote:
>
> iperf 2 uses responses per second and also provides the bounce back times as well as one way delays.
>
> The hypothesis is that network engineers have to fix KPI issues, including latency, ahead of shipping products.
>
> Asking companies to act on consumer complaints is way too late. It's also extremely costly. Those running Amazon customer service can explain how these consumer calls about their devices cause things like device returns (as that's all the call support can provide.) This wastes energy to physically ship things back, causes a stack of working items that now go to ewaste, etc.
>
> It's really on network operators, suppliers and device mfgs to get ahead of this years before consumers get their stuff.
[SM] As much as I like to tinker, I agree with you: to make an impact, doing this one network at a time scales poorly, and a joint effort seems far more effective. And yes, that had better have started yesterday than today ;)
>
> As a side note, many devices select their WiFi chanspec (AP channel+) based on the strongest RSSI. The network paths should be based on KPIs like low latency. A strong signal just means an AP is yelling too loudly and interfering with the neighbors. Pick the optimal AP chanspec that has 10 dB separation per spatial dimension, and the whole apartment complex would be better for it.
[SM] Side note: with DSL, ISPs actively optimize the per-link transmit power in both directions. They seem to do this partly to save energy/cost and partly to optimize aggregate transmission rates. Ever since vectoring was introduced to deal with crosstalk, the signals of all links connected to a DSLAM share a partial common fate. In the DSLAM-to-CPE direction the DSLAM will "pre-distort" each line's signal dynamically, so that after the unavoidable crosstalk interaction between the lines the resulting "pulse shapes" are clean(er) again when they reach the CPE (I am simplifying, but the principle holds). In the CPE-to-DSLAM direction that is not possible (there is no single entity seeing all concurrent transmissions, and hence no way to calculate or apply the pre-distortion), so the method of choice is to simply try to decode all lines together; to help with that, CPE transmit power seems to be adjusted so that the signal levels at the DSLAM are equalized. (For very short links that often results in less than the maximally possible capacity, but over the whole set of links this method seems to increase total capacity.) I would guess that in theory these methods could also be applied to RF links (except that RF, with its 3D propagation, is probably far more challenging).
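[SM] To make the downstream pre-distortion idea concrete, here is a minimal toy sketch (the 3x3 crosstalk matrix and all values are made up for illustration; real DSLAMs estimate the crosstalk couplings continuously rather than knowing them a priori). The vectoring engine effectively applies the inverse of the crosstalk channel to the transmit signals, so the channel itself undoes the pre-distortion and each CPE sees (approximately) only its own signal:

  import numpy as np

  # Toy crosstalk channel for 3 lines: diagonal = direct path,
  # off-diagonal = crosstalk coupling between lines (made-up values).
  H = np.array([[1.00, 0.10, 0.05],
                [0.10, 1.00, 0.08],
                [0.05, 0.08, 1.00]])

  x = np.array([1.0, -1.0, 0.5])   # symbols intended for the 3 CPEs

  print(H @ x)                     # without vectoring: crosstalk-distorted

  # Zero-forcing pre-distortion: transmit P @ x with P = inv(H), so the
  # channel undoes the pre-distortion: H @ (inv(H) @ x) == x (up to rounding).
  P = np.linalg.inv(H)
  print(H @ (P @ x))               # clean(er) symbols at each CPE

(Real vectoring additionally has to keep the pre-coded signals within per-line power masks, which is one reason the gains are not unbounded.)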
>
> We're so focused on buffer bloat we're ignoring everything else where incremental engineering has led to poor products & offerings.
>
> [rjmcmahon at ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e --bounceback --trip-times
> ------------------------------------------------------------
> Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 flows)
> Write buffer size: 100 Byte
> Bursting: 100 Byte writes 10 times every 1.00 second(s)
> Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 16.0 KByte (default)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 14:01:24.820 (PDT)
> [ ID] Interval Transfer Bandwidth BB cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS
> [ 1] 0.00-1.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.311/0.209/0.755/0.159 ms 0 14K/202 us 3220 rps
> [ 1] 1.00-2.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.254/0.180/0.335/0.051 ms 0 14K/210 us 3934 rps
> [ 1] 2.00-3.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.266/0.168/0.468/0.088 ms 0 14K/210 us 3754 rps
> [ 1] 3.00-4.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.294/0.184/0.442/0.078 ms 0 14K/233 us 3396 rps
> [ 1] 4.00-5.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.263/0.150/0.427/0.077 ms 0 14K/215 us 3802 rps
> [ 1] 5.00-6.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.325/0.237/0.409/0.056 ms 0 14K/258 us 3077 rps
> [ 1] 6.00-7.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.259/0.165/0.410/0.077 ms 0 14K/219 us 3857 rps
> [ 1] 7.00-8.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.277/0.193/0.415/0.068 ms 0 14K/224 us 3608 rps
> [ 1] 8.00-9.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.292/0.206/0.465/0.072 ms 0 14K/231 us 3420 rps
> [ 1] 9.00-10.00 sec 1.95 KBytes 16.0 Kbits/sec 10=0.256/0.157/0.439/0.082 ms 0 14K/211 us 3908 rps
> [ 1] 0.00-10.01 sec 19.5 KBytes 16.0 Kbits/sec 100=0.280/0.150/0.755/0.085 ms 0 14K/1033 us 3573 rps
> [ 1] 0.00-10.01 sec OWD Delays (ms) Cnt=100 To=0.169/0.074/0.318/0.056 From=0.105/0.055/0.162/0.024 Asymmetry=0.065/0.000/0.172/0.049 3573 rps
> [ 1] 0.00-10.01 sec BB8(f)-PDF: bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 (5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
>
>
> Bob
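[SM] Neat! If I read the output correctly, the RPS column appears to be essentially the reciprocal of the mean bounce-back time: for the 10-second summary, 1000 [ms/sec] / 0.280 [ms] ~ 3571, matching the reported 3573 rps up to rounding.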
>> Dave,
>> your presentation was awesome, I fully agree with you ;). I very much
>> liked your practical funnel demonstration, which was boiled down to the
>> bare minimum (I only briefly asked myself whether the liquid would spill
>> into your laptop's keyboard, and if so whether it is water-proof, but
>> you had clearly rehearsed/tried that before).
>> BTW, I always have to think of this
>> h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present
>> live from the marina ;)
>> I am still not through watching all of the presentations and panels,
>> but I can already say that team L4S continues to over-promise and
>> under-deliver; Koen's presentation itself was done well, though, and
>> might (sadly) convince people to buy into L4S = 2L2L = too little, too
>> late.
>> Stuart's RPM presentation was great, making a convincing point.
>> (Except for pitching L4S and LLD as "solutions"; I will accept them as
>> a step in the right direction, but why not go all the way and
>> embrace proper scheduling?)
>> In detail, though, I am not fully convinced by the decision to take
>> the inverse of the delay increase as the singular measure here, as I
>> consider that a bit of a squandered opportunity for public
>> outreach/education: comparing idle and working RPM is non-intuitive,
>> while idle and working RTT can immediately be subtracted
>> to see the extent of the queueing damage in actionable terms.
>> Try the same with RPM values:
>> 123-1234567:~ user$ networkQuality -v
>> ==== SUMMARY ====
>> Upload capacity: 22.208 Mbps
>> Download capacity: 88.054 Mbps
>> Upload flows: 12
>> Download flows: 12
>> Responsiveness: High (2622 RPM)
>> Base RTT: 18
>> Start: 3/12/23, 21:00:58
>> End: 3/12/23, 21:01:08
>> OS Version: Version 12.6.3 (Build 21G419)
>> Here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min]
>> to get 60000/2622 = 22.88 ms of loaded delay, and subtract the base RTT
>> of 18 ms for 60000/2622 - 18 = 4.88 ~ 5 ms of load-induced extra delay,
>> which is a useful quantity when managing a delay budget (this test was
>> performed over wired ethernet with competent AQM and traffic shaping on
>> the link, so no surprise about the outcome there). Let's look at the
>> reverse and convert the base RTT into a base RPM score instead:
>> 60000/18 = 3333 RPM; what exactly does the delta RPM of 3333 - 2622 =
>> 711 RPM now tell us about the difference between idle and working
>> conditions? [Well, since the conversion is not witchcraft, I will be
>> fine, as will others interested in the actual evoked delay, but we
>> could have had a better measure*]
>> And all that for the somewhat unhelpful car analogy... (it is not as
>> if, for internal combustion engines, bigger RPM numbers are necessarily
>> better, whether for torque or for fuel efficiency).
>> I guess that ship has sailed, though, and RPM it is.
>> *) Stuart notes that milliseconds and Hertz sound too sciency, but
>> they could simply have given the delay increase in milliseconds a
>> fancier name to solve that specific problem...
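[SM] For reference, a minimal sketch of the two conversions used above (pure arithmetic; the function names are mine):

  def rpm_to_ms(rpm):
      # 60 [sec/min] * 1000 [ms/sec] = 60000 [ms/min]
      return 60000.0 / rpm

  def ms_to_rpm(ms):
      return 60000.0 / ms

  loaded_ms = rpm_to_ms(2622)   # ~22.88 ms working RTT
  extra_ms = loaded_ms - 18     # ~4.88 ms of load-induced extra delay
  idle_rpm = ms_to_rpm(18)      # ~3333 RPM idle responsiveness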
>>> On Mar 12, 2023, at 20:31, Dave Taht via Rpm <rpm at lists.bufferbloat.net> wrote:
>>> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
>>> --
>>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>>> Dave Täht CEO, TekLibre, LLC