[Cerowrt-devel] Better results from CeroWrt 3.10.28-16
Rich Brown
richb.hanover at gmail.com
Sun Mar 2 17:27:44 PST 2014
On Mar 2, 2014, at 4:41 PM, Dave Taht <dave.taht at gmail.com> wrote:
> Nice work!
>
> I have a problem in that I can't remember if target autotuning made it
> into that release or not.
>
> Could you do a tc -s qdisc show dev ge00 on your favorite of the
> above and paste? I still think
> target is too low on the egress side with the current calculation.
Pasted below. The 14.8 msec target comes from the current adapt_target_to_slow_link() calculation for 830 kbps.
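For reference, that number appears to be the serialization delay of one worst-case packet at the shaped rate. A quick sketch of the arithmetic (the 1540-byte packet size is my assumption from reading sqm-scripts, not gospel):

UPLINK_KBPS=830
# time to serialize one 1540-byte packet at the shaped rate, in microseconds
TARGET_US=$(( 1540 * 8 * 1000 / UPLINK_KBPS ))
echo "target = ${TARGET_US} us"    # prints 14843, i.e. ~14.8 ms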
> Secondly, now that you have a setting you like, trying pie, codel, and
> ns2_codel also would be interesting.
Need to find another available time slot, but sure.
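(For anyone else who wants to try: swapping the qdisc should just be a matter of something like the following. The section name 'ge00' is my assumption; check /etc/config/sqm on your build.)

uci set sqm.ge00.qdisc='pie'    # or 'codel', 'ns2_codel'
uci commit sqm
/etc/init.d/sqm restart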
> efq_codel is currently uninteresting. It wasn't clear whether you were
> using nfq_codel or fq_codel throughout.
nfq_codel used for all tests.
Rich
root at cerowrt:~# tc -s qdisc show dev ge00
qdisc htb 1: root refcnt 2 r2q 10 default 12 direct_packets_stat 0
Sent 122649261 bytes 332175 pkt (dropped 0, overlimits 579184 requeues 0)
backlog 0b 0p requeues 0
qdisc nfq_codel 110: parent 1:11 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc nfq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
Sent 122649261 bytes 332175 pkt (dropped 129389, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 37646 ecn_mark 0
new_flows_len 1 old_flows_len 2
qdisc nfq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc ingress ffff: parent ffff:fff1 ----------------
Sent 278307120 bytes 467944 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
>
>
> On Sun, Mar 2, 2014 at 1:18 PM, Rich Brown <richb.hanover at gmail.com> wrote:
>> I took some time this weekend and ran careful speed and latency tests on the CeroWrt 3.10.28-16 build. I now have a much better understanding of how all this works, both in theory and in practice. Here's an executive summary of the overall test procedure, with lots of details below.
>>
>> Adjusting CeroWrt's configured up- and download rates in the SQM page affects both the actual data transfer rates and the latency. If you set the values too low, CeroWrt will enforce that bottleneck, and the transfer rates will be lower than your link could attain. If you configure them too high, though, the transfer rates may look better, but the latency can go off the charts. Here's how I arrived at a good balance.
>>
>> Test Conditions:
>>
>> - Running tests from my MacBook Pro (OS X 10.9.2).
>> - Wi-Fi off; ethernet cable direct to Netgear WNDR3700v2 with CeroWrt 3.10.28-16.
>> - DSL service from Fairpoint, nominally "7 Mbps down/768kbps up".
>> - DSL Modem sync rate (the actual rate that bits enter/leave my house) is 7616kbps down; 864kbps up. The line is apparently fairly clean, too.
>> - Base ping time to the nearest router at my ISP (found via traceroute) is 29-30 msec.
>> - To minimize other traffic, I turned off most of the computers at home, and also quit my mail client (which is surprisingly chatty).
>>
>> The Tests:
>>
>> I ran two different tests: netperf-wrapper with the RRUL test, and speedtest.net. These give very different views of performance. RRUL really stresses the line with multiple simultaneous upload and download streams. Speedtest.net is a consumer test that only tests one direction at a time, and only for a short time. We want to look good on both.
>>
>> For the RRUL tests, I invoked netperf-wrapper like this: netperf-wrapper rrul -p all_scaled -l 60 -H atl.richb-hanover.com -t text-shown-in-chart
>> For the Speedtest.net tests, I used their web GUI in the obvious way.
>>
>> For both tests, I used a script (pingstats.sh, see my next message) to collect the ping times and give min, max, average, median, and 10th and 90th percentile readings.
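>> (pingstats.sh itself is in my next message. For the impatient, here's a minimal sketch of the same idea; 'example.net' is a placeholder, since the real runs pinged the first-hop router at my ISP:)
>>
>> ping -c 60 example.net | sed -n 's/.*time=\([0-9.]*\).*/\1/p' | sort -n | awk '
>>   { v[NR] = $1; sum += $1 }
>>   END { printf "Min: %.3f 10pct: %.3f Avg: %.3f Median: %.3f 90pct: %.3f Max: %.3f Num pings: %d\n",
>>         v[1], v[int(NR*0.1)+1], sum/NR, v[int((NR+1)/2)], v[int(NR*0.9)+1], v[NR], NR }'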
>>
>> Test Procedure:
>>
>> I ran a series of tests starting with the up/down link rates spelled out in Sebastian Moeller's amazingly detailed note last week. See https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-February/002375.html and read it carefully. There's a lot of insight available there.
>>
>> The initial configuration was 6089/737 kbps down/up, with (nearly) default Queue Discipline settings (nfq_codel, simple.qos, ECN on for ingress, NOECN for egress, 'auto' for both the ingress and egress latency targets) and the ATM link layer with 44 bytes of overhead.
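>> (For reference, that corresponds to uci settings along these lines. The option names are my best recollection of the sqm-scripts schema and the section name 'ge00' is assumed; verify against /etc/config/sqm on your build:)
>>
>> uci set sqm.ge00.download='6089'
>> uci set sqm.ge00.upload='737'
>> uci set sqm.ge00.qdisc='nfq_codel'
>> uci set sqm.ge00.script='simple.qos'
>> uci set sqm.ge00.ingress_ecn='ECN'
>> uci set sqm.ge00.egress_ecn='NOECN'
>> uci set sqm.ge00.itarget='auto'
>> uci set sqm.ge00.etarget='auto'
>> uci set sqm.ge00.linklayer='atm'
>> uci set sqm.ge00.overhead='44'
>> uci commit sqm && /etc/init.d/sqm restart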
>>
>> With those initial configuration values, latency was good but the speeds were disappointing. I then re-ran the tests with CeroWrt configured for higher up/down link speeds to see where things broke.
>>
>> Things got better and better with increasing link rates until I hit 7600/850; at that point, latency began to climb sharply. (And of course, with SQM disabled, the latency got dreadful.)
>>
>> There was an anomaly at 7000/800 kbps. The 90th percentile and max numbers jumped up quite a lot, but went *down* in the next test of the sequence, when I increased the upload speed to 830 kbps (the 7000/830 run). I ran the experiment twice to confirm that behavior.
>>
>> I should also note that in the course of the experiment, I re-ran many of these tests. Although I did not document each of the runs, the results (speedtest.net rates and the pingstats.sh values) were quite consistent and repeatable.
>>
>> Conclusion:
>>
>> I'm now running CeroWrt 3.10.28-16 configured for 7000/830 down/up, with the (nearly) default Queue Discipline settings and the ATM link layer with 44 bytes of overhead. With these settings, latency is well in hand and my network is still pretty speedy.
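>> (Those settings leave only a thin margin below the modem sync rates; quick arithmetic:)
>>
>> awk 'BEGIN { printf "down: %.1f%%  up: %.1f%%\n", 7000*100/7616, 830*100/864 }'
>> # prints: down: 91.9%  up: 96.1%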
>>
>> We need to figure out how to explain to people what to expect regarding the tradeoff: the "faster speeds" that show up in Speedtest.net (with accompanying crappy real-world performance) versus slightly slower speeds with a *way* better experience.
>>
>> The data follows...
>>
>> Rich
>>
>> ================================================================
>>
>> RRUL Tests: The charts associated with these RRUL runs are all available at http://richb-hanover.com/rrul-tests-cerowrt-3-10-28-16/
>>
>> 6089/737:
>> Min: 28.936 10pct: 29.094 Avg: 40.529 Median: 37.961 90pct: 52.636 Max: 77.171 Num pings: 77
>>
>> 6200/750:
>> Min: 28.715 10pct: 29.298 Avg: 41.805 Median: 39.826 90pct: 57.414 Max: 72.363 Num pings: 77
>>
>> 6400/800:
>> Min: 28.706 10pct: 29.119 Avg: 39.598 Median: 38.428 90pct: 52.351 Max: 69.492 Num pings: 78
>>
>> 6600/830:
>> Min: 28.485 10pct: 29.114 Avg: 41.708 Median: 39.753 90pct: 57.552 Max: 87.328 Num pings: 77
>>
>> 7000/800:
>> Min: 28.570 10pct: 29.180 Avg: 46.245 Median: 42.684 90pct: 62.376 Max: 169.991 Num pings: 77
>> Min: 28.775 10pct: 29.226 Avg: 43.628 Median: 40.446 90pct: 60.216 Max: 121.334 Num pings: 76 (2nd run)
>>
>> 7000/830:
>> Min: 28.942 10pct: 29.285 Avg: 44.283 Median: 45.318 90pct: 58.002 Max: 85.035 Num pings: 78
>> Min: 28.951 10pct: 29.479 Avg: 43.182 Median: 41.000 90pct: 57.570 Max: 74.964 Num pings: 76 (2nd run)
>>
>> 7600/850:
>> Min: 28.756 10pct: 29.078 Avg: 55.426 Median: 46.063 90pct: 81.847 Max: 277.807 Num pings: 84
>>
>> SQM Disabled:
>> Min: 28.665 10pct: 29.062 Avg: 1802.521 Median: 2051.276 90pct: 2762.941 Max: 4217.644 Num pings: 78
>>
>> ================================================================
>>
>> Speedtest.net: The first values on each line are the down/up rates (in Mbps) reported in the Speedtest GUI
>>
>> 6089/737:
>> 5.00/0.58
>> Min: 28.709 10pct: 28.935 Avg: 33.416 Median: 31.619 90pct: 38.608 Max: 49.193 Num pings: 45
>>
>> 6200/750:
>> 5.08/0.58
>> Min: 28.759 10pct: 29.055 Avg: 33.974 Median: 32.584 90pct: 41.938 Max: 46.605 Num pings: 44
>>
>> 6400/800:
>> 5.24/0.60
>> Min: 28.447 10pct: 28.826 Avg: 34.675 Median: 31.155 90pct: 41.285 Max: 81.503 Num pings: 43
>>
>> 6600/830:
>> 5.41/0.65
>> Min: 28.868 10pct: 29.053 Avg: 35.158 Median: 32.928 90pct: 44.099 Max: 51.571 Num pings: 44
>>
>> 7000/800:
>> 5.73/0.62
>> Min: 28.359 10pct: 28.841 Avg: 35.205 Median: 33.620 90pct: 43.735 Max: 54.812 Num pings: 44
>>
>> 7000/830:
>> 5.74/0.65 (5.71/0.62 second run)
>> Min: 28.605 10pct: 29.055 Avg: 34.945 Median: 31.773 90pct: 42.645 Max: 54.077 Num pings: 44
>> Min: 28.649 10pct: 28.820 Avg: 34.866 Median: 32.398 90pct: 43.533 Max: 69.288 Num pings: 56 (2nd run)
>>
>> 7600/850:
>> 6.20/0.67
>> Min: 28.835 10pct: 28.963 Avg: 36.253 Median: 34.912 90pct: 44.659 Max: 54.023 Num pings: 48
>>
>> SQM Disabled:
>> 6.46/0.73
>> Min: 28.452 10pct: 28.872 Avg: 303.754 Median: 173.498 90pct: 499.678 Max: 1799.814 Num pings: 45
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html