[Rpm] Changes to RPM calculation for MacOS Ventura?

rjmcmahon rjmcmahon at rjmcmahon.com
Thu Nov 10 15:59:49 EST 2022


As an FYI, iperf 2 supports traffic testing with TCP_NOTSENT_LOWAT via
the --tcp-write-prefetch size option. We tend to use small values, e.g.
16K, to mitigate send-side bloat. It mostly doesn't seem to impact
throughput and it does improve latency, so it seems like a good idea to
enable it on sockets whether or not the traffic is interactive. We
haven't measured write() syscall performance, though, so we're not sure
about the trade-offs there. We're also not sure of the impact when
using Linux's io_uring (iperf 2 does not use it:
https://en.wikipedia.org/wiki/Io_uring ).

--tcp-write-prefetch n[kmKM]
Set TCP_NOTSENT_LOWAT on the socket and use event-based writes per
select() on the socket.
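
For anyone curious what that looks like at the socket layer, here is a
minimal sketch (not iperf 2's actual code) of setting TCP_NOTSENT_LOWAT
to 16K and writing only when select() reports the socket writable,
i.e. once the unsent data queued in the kernel has drained below the
threshold:

/* Sketch only: cap unsent socket data at 16K and do event-based writes. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static void send_loop(int sock)
{
    int lowat = 16 * 1024;              /* like --tcp-write-prefetch 16K */
    setsockopt(sock, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat));

    char buf[16 * 1024];
    memset(buf, 0, sizeof(buf));

    for (;;) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        /* Wakes up once unsent data drops below TCP_NOTSENT_LOWAT,
         * which keeps the send-side backlog (and bloat) small. */
        if (select(sock + 1, NULL, &wfds, NULL, NULL) <= 0)
            break;
        if (write(sock, buf, sizeof(buf)) < 0)
            break;
    }
}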

https://sourceforge.net/projects/iperf2/

Bob
> Hello Jonathan,
> 
>> On Nov 3, 2022, at 3:09 PM, jf at jonathanfoulkes.com wrote:
>> 
>> Hi Christoph,
>> 
>> Thanks for the reply; it clarifies why the metric would be different,
>> but it then leads to questions about how and where bufferbloat is
>> occurring on the links creating the load. I noted the tc stats show
>> a peak delay of 81 ms on download and 139 ms on upload, so there is
>> indeed some queue build-up in the router.
> 
> Yes, there are multiple sources of bufferbloat. First, it happens on
> the bottleneck link. But then, it also happens in the sender’s
> TCP stack (thus the importance of TCP_NOTSENT_LOWAT). Add
> flow-queuing into the mix and it gets even more tricky :-)
> 
> But, in the end, we want low latency not only on separate connections,
> but also on the connections that are carrying the high-throughput
> traffic. These days with H2/H3, connections are aggressively reused
> for transmitting the data, thus potentially mixing a bulk-transfer
> with a short latency-sensitive transfer on the same connection.
> 
>> So even when testing with QoS cut to 50% of the upstream capacity, I
>> still get no better than Medium RPM ratings.
>> 
>> Here is the test run on that unit, set for roughly half ( 80 / 10 )
>> of the upstream line ( 180 / 24 ); this one is from macOS 12.6:
>> 
>> ==== SUMMARY ====
>> 
>> Upload capacity: 8.679 Mbps
>> Download capacity: 75.213 Mbps
>> Upload flows: 20
>> Download flows: 12
>> Upload Responsiveness: High (2659 RPM)
>> Download Responsiveness: High (2587 RPM)
>> Base RTT: 14
>> Start: 11/3/22, 4:05:01 PM
>> End: 11/3/22, 4:05:26 PM
>> OS Version: Version 12.6 (Build 21G115)
>> 
>> And this one is from Ventura (13):
>> 
>> ==== SUMMARY ====
>> Uplink capacity: 9.328 Mbps (Accuracy: High)
>> Downlink capacity: 76.555 Mbps (Accuracy: High)
>> Uplink Responsiveness: Low (143 RPM) (Accuracy: High)
>> Downlink Responsiveness: Medium (380 RPM) (Accuracy: High)
>> Idle Latency: 29.000 milli-seconds (Accuracy: High)
>> Interface: en6
>> Uplink bytes transferred: 16.734 MB
>> Downlink bytes transferred: 85.637 MB
>> Uplink Flow count: 20
>> Downlink Flow count: 12
>> Start: 11/3/22, 4:03:33 PM
>> End: 11/3/22, 4:03:58 PM
>> OS Version: Version 13.0 (Build 22A380)
>> 
>> Does all-out use of ECN cause a penalty?
>> 
>> On download, we recorded 9504 marks, but only 38 drops. So the flows
>> should have been well managed with all the early feedback to
>> senders.
>> 
>> Do the metrics look for drops, and does this low drop rate therefore
>> suggest there is bloat, given the amount of traffic in flight?
> 
> No, we don’t look for drops.
> 
> Christoph
> 
>> The device under test is an MT7621 running stock OpenWRT 22.03.2
>> with SQM installed and using layer-cake. But we see similar metrics
>> on an i5-4200 x86 box with Intel NICs, so it’s not horsepower-related.
>> I just retested on the x86, with the same ballpark results.
>> 
>> I’ll re-test tomorrow with all the ECN features on the Mac and the
>> router disabled to see what that does to the metrics.
>> 
>> Thanks,
>> 
>> Jonathan
>> 
>> On Nov 1, 2022, at 5:52 PM, Christoph Paasch <cpaasch at apple.com>
>> wrote:
>> 
>> Hello Jonathan,
>> 
>> On Oct 28, 2022, at 2:45 PM, jf--- via Rpm
>> <rpm at lists.bufferbloat.net> wrote:
>> 
>> Hopefully, Christoph can provide some details on the changes from
>> the prior networkQuality test, as we’re seeing some pretty large
>> differences in results with the latest RPM tests.
>> 
>> Where before we’d see results above 1,500 RPM (and multiple results
>> above 2,000 RPM) for a DOCSIS 3.1 line with QoS enabled (180 down /
>> 35 up), it now returns a peak download RPM of ~600 and ~800 for
>> upload.
>> 
>> latest results:
>> 
>> ==== SUMMARY ====
>> Uplink capacity: 25.480 Mbps (Accuracy: High)
>> Downlink capacity: 137.768 Mbps (Accuracy: High)
>> Uplink Responsiveness: Medium (385 RPM) (Accuracy: High)
>> Downlink Responsiveness: Medium (376 RPM) (Accuracy: High)
>> Idle Latency: 43.875 milli-seconds (Accuracy: High)
>> Interface: en8
>> Uplink bytes transferred: 35.015 MB
>> Downlink bytes transferred: 154.649 MB
>> Uplink Flow count: 16
>> Downlink Flow count: 12
>> Start: 10/28/22, 5:12:30 PM
>> End: 10/28/22, 5:12:54 PM
>> OS Version: Version 13.0 (Build 22A380)
>> 
>> Latencies (as monitored via PingPlotter) stay absolutely steady
>> during these tests.
>> 
>> So unless my ISP coincidentally started having major service issues,
>> I’m scratching my head as to why.
>> 
>> For contrast, the Ookla result is as follows:
>> https://www.speedtest.net/result/13865976456 with loaded latencies
>> of 15 ms down and 18 ms up.
>> 
>> In Ventura, we started adding the latency on the load-generating
>> connections to the final RPM calculation as well. The formula being
>> used is now exactly what is in the v01 IETF draft.
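>>
>> Roughly speaking, responsiveness is just the measured working latency
>> converted into round trips per minute, and the latency samples now
>> come both from probes on separate connections and from probes sent on
>> the load-generating connections themselves. A minimal sketch of that
>> conversion (illustrative only; hypothetical sample values and a plain
>> 50/50 blend, not the draft’s exact aggregation) could look like:
>>
>> #include <stdio.h>
>>
>> static double mean_ms(const double *v, int n)
>> {
>>     double sum = 0;
>>     for (int i = 0; i < n; i++)
>>         sum += v[i];
>>     return sum / n;
>> }
>>
>> int main(void)
>> {
>>     /* hypothetical latency samples, in milliseconds */
>>     double foreign_ms[] = { 20.0, 25.0, 22.0 };    /* separate connections        */
>>     double self_ms[]    = { 150.0, 160.0, 140.0 }; /* load-generating connections */
>>
>>     /* blend both populations; the draft defines the real weighting */
>>     double latency_ms = 0.5 * mean_ms(foreign_ms, 3) + 0.5 * mean_ms(self_ms, 3);
>>
>>     printf("responsiveness ~ %.0f RPM\n", 60000.0 / latency_ms);
>>     return 0;
>> }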
>> 
>> Very likely the bottleneck in your network does FQ, and so latency
>> on separate connections is very low, while your load-generating
>> connections are still bufferbloated.
>> 
>> Ookla measures latency only on separate connections, thus will also
>> be heavily impacted by FQ.
>> 
>> Does that clarify it?
>> 
>> Cheers,
>> Christoph
>> 
>> Further machine details: MacBook Pro 16” (2019) using a USB-C to
>> Ethernet adapter.
>> I run with full ECN enabled:
>> 
>> sudo sysctl -w net.inet.tcp.disable_tcp_heuristics=1
>> 
>> sudo sysctl -w net.inet.tcp.ecn_initiate_out=1
>> 
>> sudo sysctl -w net.inet.tcp.ecn_negotiate_in=1
>> and also with instant ack replies:
>> 
>> sysctl net.inet.tcp.delayed_ack
>> net.inet.tcp.delayed_ack: 0
>> 
>> I did try with delayed_ack=1, and the results were about the same.
>> 
>> Thanks in advance,
>> 
>> Jonathan Foulkes
>> 

