From: Sebastian Moeller <moeller0@gmx.de>
Date: Sun, 12 Mar 2023 22:37:10 +0100
To: rjmcmahon
Cc: Dave Täht, Dave Taht via Starlink, Rpm, Cake List, bloat
Subject: Re: [Starlink] [Rpm] so great to see ISPs that care

Hi Bob,

> On Mar 12, 2023, at 22:02, rjmcmahon wrote:
>
> iperf 2 uses responses per second and also provides the bounce back times as well as one way delays.
>
> The hypothesis is that network engineers have to fix KPI issues, including latency, ahead of shipping products.
>
> Asking companies to act on consumer complaints is way too late. It's also extremely costly. Those running Amazon customer service can explain how these consumer calls about their devices cause things like device returns (as that's all the call support can provide). This wastes energy to physically ship things back, causes a stack of working items that now go to e-waste, etc.
>
> It's really on network operators, suppliers and device mfgs to get ahead of this years before consumers get their stuff.

	[SM] As much as I like to tinker, I agree with you: to make an impact, doing this one network at a time scales poorly, and a joint effort seems far more effective. And yes, that is better started yesterday than today ;)

>
> As a side note, many devices select their WiFi chanspec (AP channel+) based on the strongest RSSI. The network paths should be based on KPIs like low latency. A strong signal just means an AP is yelling too loudly and interfering with the neighbors. Try the optimal AP chanspec that has 10 dB separation per spatial dimension and the whole apartment complex would be better for it.

	[SM] Side note: with DSL, ISPs are actively optimizing the per-link transmit power in both directions. They seem to do this partly to save energy/cost and partly to optimize group transmission rates. Ever since vectoring was introduced to deal with crosstalk, all links connected to a DSLAM share a partial common signal fate. In the DSLAM-to-CPE direction the DSLAM will "pre-distort" each line's signal dynamically, so that after the unavoidable crosstalk interaction between the lines the resulting "pulse shapes" are clean(er) again when they reach the CPE (I am simplifying, but the principle holds; a toy numerical sketch follows below). In the CPE-to-DSLAM direction that is not possible (there is no single entity seeing all concurrent transmissions, hence no way to calculate or apply the pre-distortion), so the method of choice is to simply try to decode all lines together; to help with that, the CPE transmit power seems to be adjusted so that the signal level at the DSLAM is equalized. (For very short links that often results in less than the maximally possible capacity, but over the whole set of links this method seems to increase total capacity.) I would guess that in theory these methods could also be applied on RF links (except that RF, with its 3D propagation, is probably far more challenging).

>
> We're so focused on buffer bloat we're ignoring everything else where incremental engineering has led to poor products & offerings.
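	[SM] To make the downstream pre-distortion idea above a bit more concrete, here is a toy zero-forcing sketch in Python with made-up coupling numbers; real vectoring DSPs are of course far more involved, so take this purely as an illustration of the principle:

# Toy model: three DSL lines, crosstalk coupling matrix H (made-up values).
import numpy as np

H = np.array([[1.00, 0.12, 0.05],
              [0.10, 1.00, 0.08],
              [0.04, 0.09, 1.00]])   # diagonal = direct path, off-diagonal = crosstalk

s = np.array([1.0, -1.0, 1.0])       # symbols each CPE is supposed to receive

y_naive = H @ s                      # no pre-distortion: crosstalk smears the symbols
x_pre   = np.linalg.solve(H, s)      # "pre-distorted" transmit vector (zero-forcing precoder)
y_vect  = H @ x_pre                  # after crosstalk, each CPE sees the clean symbol again

print(y_naive)                       # ~[ 0.93 -0.82  0.95]  -> distorted
print(y_vect)                        # ~[ 1.   -1.    1.  ]  -> clean(er), as described above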
>
> [rjmcmahon@ryzen3950 iperf2-code]$ iperf -c 192.168.1.72 -i 1 -e --bounceback --trip-times
> ------------------------------------------------------------
> Client connecting to 192.168.1.72, TCP port 5001 with pid 3123814 (1 flows)
> Write buffer size: 100 Byte
> Bursting: 100 Byte writes 10 times every 1.00 second(s)
> Bounce-back test (size= 100 Byte) (server hold req=0 usecs & tcp_quickack)
> TOS set to 0x0 and nodelay (Nagle off)
> TCP window size: 16.0 KByte (default)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local 192.168.1.69%enp4s0 port 41336 connected with 192.168.1.72 port 5001 (prefetch=16384) (bb w/quickack len/hold=100/0) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/284) (ct=0.33 ms) on 2023-03-12 14:01:24.820 (PDT)
> [ ID] Interval        Transfer     Bandwidth        BB cnt=avg/min/max/stdev         Rtry  Cwnd/RTT      RPS
> [ 1] 0.00-1.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.311/0.209/0.755/0.159 ms    0     14K/202 us    3220 rps
> [ 1] 1.00-2.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.254/0.180/0.335/0.051 ms    0     14K/210 us    3934 rps
> [ 1] 2.00-3.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.266/0.168/0.468/0.088 ms    0     14K/210 us    3754 rps
> [ 1] 3.00-4.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.294/0.184/0.442/0.078 ms    0     14K/233 us    3396 rps
> [ 1] 4.00-5.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.263/0.150/0.427/0.077 ms    0     14K/215 us    3802 rps
> [ 1] 5.00-6.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.325/0.237/0.409/0.056 ms    0     14K/258 us    3077 rps
> [ 1] 6.00-7.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.259/0.165/0.410/0.077 ms    0     14K/219 us    3857 rps
> [ 1] 7.00-8.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.277/0.193/0.415/0.068 ms    0     14K/224 us    3608 rps
> [ 1] 8.00-9.00 sec   1.95 KBytes  16.0 Kbits/sec   10=0.292/0.206/0.465/0.072 ms    0     14K/231 us    3420 rps
> [ 1] 9.00-10.00 sec  1.95 KBytes  16.0 Kbits/sec   10=0.256/0.157/0.439/0.082 ms    0     14K/211 us    3908 rps
> [ 1] 0.00-10.01 sec  19.5 KBytes  16.0 Kbits/sec   100=0.280/0.150/0.755/0.085 ms   0     14K/1033 us   3573 rps
> [ 1] 0.00-10.01 sec OWD Delays (ms) Cnt=100 To=0.169/0.074/0.318/0.056 From=0.105/0.055/0.162/0.024 Asymmetry=0.065/0.000/0.172/0.049 3573 rps
> [ 1] 0.00-10.01 sec BB8(f)-PDF: bin(w=100us):cnt(100)=2:14,3:57,4:20,5:8,8:1 (5.00/95.00/99.7%=2/5/8,Outliers=0,obl/obu=0/0)
>
>
> Bob

>> Dave,
>> your presentation was awesome, I fully agree with you ;). I very much liked your practical funnel demonstration, which was boiled down to the bare minimum (I only briefly asked myself whether the liquid would spill into your laptop's keyboard, and if so whether it is water-proof, but you clearly had rehearsed/tried that before).
>> BTW, I always have to think of this h++ps://www.youtube.com/watch?v=R7yfISlGLNU somehow when you present live from the marina ;)
>> I am still not through watching all of the presentations and panels, but can already say that team L4S continues to over-promise and under-deliver; Koen's presentation itself was done well, though, and might (sadly) convince people to buy into L4(S) = 2L2L = too little, too late.
>> Stuart's RPM presentation was great, making a convincing point. (Except for pitching L4S and LLD as "solutions"; I will accept them as a step in the right direction, but why not go all the way and embrace proper scheduling?)
>> In detail though, I am not fully convinced by the decision to take the inverse of the delay under working conditions as the singular measure here, as I consider that a bit of a squandered opportunity for public outreach/education: comparing idle and working RPM is non-intuitive, while idle and working RTT can be immediately subtracted to see the extent of the queueing damage in actionable terms.
>> Try the same with RPM values:
>> 123-1234567:~ user$ networkQuality -v
>> ==== SUMMARY ====
>> Upload capacity: 22.208 Mbps
>> Download capacity: 88.054 Mbps
>> Upload flows: 12
>> Download flows: 12
>> Responsiveness: High (2622 RPM)
>> Base RTT: 18
>> Start: 3/12/23, 21:00:58
>> End: 3/12/23, 21:01:08
>> OS Version: Version 12.6.3 (Build 21G419)
>> Here we can divide 60 [sec/minute] * 1000 [ms/sec] by the RPM [1/min] to get 60000/2622 = 22.88 ms of loaded delay, and subtract the base RTT of 18 ms: 60000/2622 - 18 = 4.88, so ~5 ms of added loaded delay, which is a useful quantity when managing a delay budget (this test was performed over wired ethernet with competent AQM and traffic shaping on the link, so no surprise about the outcome there). Let's look at the reverse and convert the base RTT into a base RPM score instead: 60000/18 = 3333 RPM; what exactly does the delta RPM of 3333 - 2622 = 711 RPM now tell us about the difference between idle and working conditions? [Well, since the conversion is not witchcraft, I will be fine, as will others interested in the actual evoked delay, but we could have gotten a better measure.*]
>> And all that for the somewhat unhelpful car analogy... (it is not as if, for internal combustion engines, bigger is necessarily better for RPM, either for torque or for fuel efficiency).
>> I guess that ship has sailed, though, and RPM it is.
>> *) Stuart notes that milliseconds and Hertz sound too sciency, but they could simply have given the delay increase in milliseconds a fancier name to solve that specific problem...
>>> On Mar 12, 2023, at 20:31, Dave Taht via Rpm wrote:
>>> https://www.reddit.com/r/HomeNetworking/comments/11pmc9a/comment/jbypj0z/?context=3
>>> --
>>> Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
>>> Dave Täht CEO, TekLibre, LLC
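P.S.: For anyone who wants to play with the RPM <-> delay conversion quoted above, here is a minimal Python sketch of that arithmetic (numbers taken from the networkQuality run above; the helper name is of course just mine):

# Convert Apple networkQuality RPM back into delay figures (values from the run above).
def rpm_to_delays_ms(rpm, base_rtt_ms):
    loaded_rtt_ms = 60_000 / rpm              # 60 s/min * 1000 ms/s divided by round-trips per minute
    return loaded_rtt_ms, loaded_rtt_ms - base_rtt_ms

loaded_rtt, added_delay = rpm_to_delays_ms(rpm=2622, base_rtt_ms=18)
print(f"loaded RTT ~ {loaded_rtt:.2f} ms, added queueing delay ~ {added_delay:.2f} ms")
# -> loaded RTT ~ 22.88 ms, added queueing delay ~ 4.88 ms

base_rpm = 60_000 / 18                        # the reverse direction: base RTT expressed as an RPM score
print(f"base RPM ~ {base_rpm:.0f}, delta RPM ~ {base_rpm - 2622:.0f}")
# -> base RPM ~ 3333, delta RPM ~ 711 (much less intuitive than ~5 ms of added delay)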