Hi Christoph,

Thanks for the reply. It clarifies why the metric would be different, but it leads to questions about how and where bufferbloat is occurring on the links creating the load. I noted the tc stats show a peak delay of 81ms on download and 139ms on upload, so there is indeed some queue build-up in the router.

Even testing with QoS cut to 50% of the upstream capacity, I still get no better than Medium RPM rates.

Here is the test run on that unit, set for roughly half (80/10) of the upstream line (180/24). This one is from macOS 12.6:

==== SUMMARY ====
Upload capacity: 8.679 Mbps
Download capacity: 75.213 Mbps
Upload flows: 20
Download flows: 12
Upload Responsiveness: High (2659 RPM)
Download Responsiveness: High (2587 RPM)
Base RTT: 14
Start: 11/3/22, 4:05:01 PM
End: 11/3/22, 4:05:26 PM
OS Version: Version 12.6 (Build 21G115)

And this one is from Ventura (13.0):

==== SUMMARY ====
Uplink capacity: 9.328 Mbps (Accuracy: High)
Downlink capacity: 76.555 Mbps (Accuracy: High)
Uplink Responsiveness: Low (143 RPM) (Accuracy: High)
Downlink Responsiveness: Medium (380 RPM) (Accuracy: High)
Idle Latency: 29.000 milli-seconds (Accuracy: High)
Interface: en6
Uplink bytes transferred: 16.734 MB
Downlink bytes transferred: 85.637 MB
Uplink Flow count: 20
Downlink Flow count: 12
Start: 11/3/22, 4:03:33 PM
End: 11/3/22, 4:03:58 PM
OS Version: Version 13.0 (Build 22A380)

Does all-out use of ECN cause a penalty?

On download, we recorded 9504 ECN marks but only 38 drops, so the flows should have been well managed, given all the early feedback to the senders.
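For reference, those delay and mark/drop figures come from cake’s per-tin statistics on the router, read with something like this (assuming eth0 is the shaped WAN interface):

tc -s qdisc show dev eth0

The pk_delay / av_delay rows show the queue delay per tin, and the drops and marks rows show the AQM’s actions.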
Do the metrics look for drops, and does the low drop rate therefore read as bloat, given the amount of traffic in flight?

The device under test is an MT7621 running stock OpenWrt 22.03.2 with SQM installed, using layer-cake. But we see similar metrics on an i5-4200 x86 box with Intel NICs, so it’s not horsepower related. I just retested on the x86 box, with results in the same ballpark.

I’ll re-test tomorrow with all the ECN features on the Mac and the router disabled to see what that does to the metrics.

Thanks,

Jonathan

On Nov 1, 2022, at 5:52 PM, Christoph Paasch <cpaasch@apple.com> wrote:

> Hello Jonathan,
>
> On Oct 28, 2022, at 2:45 PM, jf--- via Rpm <rpm@lists.bufferbloat.net> wrote:
>
>> Hopefully Christoph can provide some details on the changes from the prior networkQuality test, as we’re seeing some pretty large changes in results for the latest RPM tests.
>>
>> Where before we’d see results above 1,500 RPM (including multiple results above 2,000 RPM) on a DOCSIS 3.1 line with QoS enabled (180 down / 35 up), it now returns a peak download RPM of ~600 and ~800 for upload.
>>
>> Latest results:
>>
>> ==== SUMMARY ====
>> Uplink capacity: 25.480 Mbps (Accuracy: High)
>> Downlink capacity: 137.768 Mbps (Accuracy: High)
>> Uplink Responsiveness: Medium (385 RPM) (Accuracy: High)
>> Downlink Responsiveness: Medium (376 RPM) (Accuracy: High)
>> Idle Latency: 43.875 milli-seconds (Accuracy: High)
>> Interface: en8
>> Uplink bytes transferred: 35.015 MB
>> Downlink bytes transferred: 154.649 MB
>> Uplink Flow count: 16
>> Downlink Flow count: 12
>> Start: 10/28/22, 5:12:30 PM
>> End: 10/28/22, 5:12:54 PM
>> OS Version: Version 13.0 (Build 22A380)
>>
>> Latencies (as monitored via PingPlotter) stay absolutely steady during these tests.
>>
>> So unless my ISP coincidentally started having major service issues, I’m scratching my head as to why.
>>
>> For contrast, the Ookla result is as follows: https://www.speedtest.net/result/13865976456 with 15ms down, 18ms up loaded latencies.
>
> In Ventura, we started adding the latency on the load-generating connections to the final RPM calculation as well. The formula being used is now exactly what is in the v01 IETF draft.
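For reference, that calculation has roughly the following shape. This is a sketch only: equal weighting of the two latency pools is assumed here, while draft-ietf-ippm-responsiveness-01 defines the exact averaging (trimmed means) and weights.

def rpm(separate_ms, loaded_ms):
    # Probes on freshly created ("separate") connections: low under FQ.
    separate = sum(separate_ms) / len(separate_ms)
    # Probes sent on the load-generating connections: high if those
    # flows are sitting in a bloated queue.
    loaded = sum(loaded_ms) / len(loaded_ms)
    avg_ms = (separate + loaded) / 2  # equal weighting assumed
    return 60_000 / avg_ms            # round trips per minute

# Illustration with made-up numbers: 15 ms on separate connections but
# 300 ms on the load-generating ones gives 60000 / 157.5, i.e. ~381 RPM,
# where the separate connections alone would have scored 4000 RPM.
print(rpm([15.0], [300.0]))

That blend would explain the drop: with FQ at the bottleneck, the separate-connection probes stay near the base RTT while the load-bearing flows see the full queue.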
> Very likely the bottleneck in your network does FQ, and so latency on separate connections is very low, while your load-generating connections are still bufferbloated.
>
> Ookla measures latency only on separate connections, and thus will also be heavily impacted by FQ.
>
> Does that clarify it?
>
> Cheers,
> Christoph
>
>> Further machine details: MacBook Pro 16” (2019) using a USB-C to Ethernet adapter.
>>
>> I run with full ECN enabled:
>>
>> sudo sysctl -w net.inet.tcp.disable_tcp_heuristics=1
>> sudo sysctl -w net.inet.tcp.ecn_initiate_out=1
>> sudo sysctl -w net.inet.tcp.ecn_negotiate_in=1
>>
>> and also with instant ACK replies:
>>
>> sysctl net.inet.tcp.delayed_ack
>> net.inet.tcp.delayed_ack: 0
>>
>> I did try with delayed_ack=1, and the results were about the same.
>>
>> Thanks in advance,
>>
>> Jonathan Foulkes
>>
>> _______________________________________________
>> Rpm mailing list
>> Rpm@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/rpm