Hello,

> On Nov 1, 2022, at 3:09 PM, Sebastian Moeller wrote:
>
> Hi Christoph,
>
> On 1 November 2022 22:52:21 CET, Christoph Paasch via Rpm wrote:
>> Hello Jonathan,
>>
>>> On Oct 28, 2022, at 2:45 PM, jf--- via Rpm wrote:
>>>
>>> Hopefully, Christoph can provide some details on the changes from the prior networkQuality test, as we're seeing some pretty large changes in results for the latest RPM tests.
>>>
>>> Where before we'd see results above 1,500 RPM (with multiple results above 2,000 RPM) for a DOCSIS 3.1 line with QoS enabled (180 Mbps down / 35 Mbps up), it now returns a peak download RPM of ~600 and ~800 for upload.
>>>
>>> Latest results:
>>>
>>> ==== SUMMARY ====
>>> Uplink capacity: 25.480 Mbps (Accuracy: High)
>>> Downlink capacity: 137.768 Mbps (Accuracy: High)
>>> Uplink Responsiveness: Medium (385 RPM) (Accuracy: High)
>>> Downlink Responsiveness: Medium (376 RPM) (Accuracy: High)
>>> Idle Latency: 43.875 milli-seconds (Accuracy: High)
>>> Interface: en8
>>> Uplink bytes transferred: 35.015 MB
>>> Downlink bytes transferred: 154.649 MB
>>> Uplink Flow count: 16
>>> Downlink Flow count: 12
>>> Start: 10/28/22, 5:12:30 PM
>>> End: 10/28/22, 5:12:54 PM
>>> OS Version: Version 13.0 (Build 22A380)
>>>
>>> Latencies (as monitored via PingPlotter) stay absolutely steady during these tests, so unless my ISP coincidentally started having major service issues, I'm scratching my head as to why.
>>>
>>> For contrast, the Ookla result is as follows: https://www.speedtest.net/result/13865976456, with 15 ms download and 18 ms upload loaded latencies.
>>
>> In Ventura, we started adding the latency on the load-generating connections to the final RPM calculation as well. The formula being used is now exactly what is in the v01 IETF draft.
>
> [SM] I have been quietly wondering whether reporting both inter- and intra-load-bearing-flow responsiveness would be a nice option for verbose mode. IMHO, both give relevant information about a link's usability under working conditions.

Yes, that's a good suggestion! We could expose this in verbose mode.

Christoph

>
> Regards,
> Sebastian
>
>
>>
>> Very likely the bottleneck in your network does FQ, so latency on separate connections is very low, while your load-generating connections are still bufferbloated.
>>
>> Ookla measures latency only on separate connections, and thus will also be heavily impacted by FQ.
>>
>> Does that clarify it?
>>
>> Cheers,
>> Christoph
>>
>>
>>>
>>> Further machine details: MacBook Pro 16” (2019) using a USB-C to Ethernet adapter.
>>> I run with full ECN enabled:
>>>
>>> sudo sysctl -w net.inet.tcp.disable_tcp_heuristics=1
>>> sudo sysctl -w net.inet.tcp.ecn_initiate_out=1
>>> sudo sysctl -w net.inet.tcp.ecn_negotiate_in=1
>>>
>>> and also with instant ACK replies:
>>>
>>> sysctl net.inet.tcp.delayed_ack
>>> net.inet.tcp.delayed_ack: 0
>>>
>>> I did try with delayed_ack=1, and the results were about the same.
>>>
>>> Thanks in advance,
>>>
>>> Jonathan Foulkes
>>>
>>> _______________________________________________
>>> Rpm mailing list
>>> Rpm@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/rpm
>>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
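
P.S. for the archives: below is a rough back-of-the-envelope sketch (plain Python, not the networkQuality implementation) of why folding the latency seen on the load-generating connections into the RPM figure drops the number so much on an FQ bottleneck. The 50/50 weighting and the sample latencies are my own illustrative assumptions, not the exact aggregation from the v01 draft, so please treat the numbers as an illustration only.

# Illustrative only -- not the networkQuality code; the weights and latencies
# below are assumptions chosen to mimic an FQ bottleneck under load.
import statistics

def rpm(latencies_ms):
    # Round-trips per minute from the average round-trip time in milliseconds.
    return 60_000 / statistics.mean(latencies_ms)

# Separate probe connections: FQ gives them their own queue, so they stay
# close to idle latency even while the link is saturated.
probe_ms = [25, 28, 30, 27]

# Load-generating connections: these flows fill the bottleneck queue
# themselves, so they still see the bufferbloat.
loaded_ms = [140, 155, 160, 150]

print(round(rpm(probe_ms)))     # ~2180 -- what a probe-only measurement reports
print(round(rpm(loaded_ms)))    # ~400  -- responsiveness of the loaded flows themselves
blended = 0.5 * statistics.mean(probe_ms) + 0.5 * statistics.mean(loaded_ms)
print(round(60_000 / blended))  # ~670  -- once both kinds of latency contribute

With numbers like these, the probe-only view looks like the old >2,000 RPM results, while the blended figure lands in the ~600-800 range Jonathan is now seeing.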