[Cerowrt-devel] Better results from CeroWrt 3.10.28-16
Dave Taht
dave.taht at gmail.com
Sun Mar 2 14:19:17 PST 2014
fq_codel is fq_codel from Linux mainline, usually configured with a
300 byte quantum (see the tc sketch just after these descriptions).
codel is Linux mainline codel.
ns2_codel is codel with a smoother decay curve, derived from the
patches posted on Kathie's website.
nfq_codel is derived from ns2_codel, with an enhancement that rotates
the flow list on every packet, so it behaves more like sfq_codel.
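To make the quantum point concrete, here is a minimal sketch of
attaching one of these qdiscs by hand with a 300 byte quantum. This is
illustrative only - simple.qos really hangs the qdisc off an HTB class
rather than the interface root, and ge00 is just the usual cero WAN
interface name:

  # sketch only: fq_codel with a 300 byte quantum attached straight to
  # ge00, leaving out the HTB shaper that simple.qos wraps around it
  tc qdisc add dev ge00 root fq_codel quantum 300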
In at least one variant of cero a month or two back, I eliminated the
maxpacket check in nfq_codel, in the hope of not having to fiddle with
the target. It helped somewhat, but also led to more loss when I felt
it shouldn't. I think I have a gentler version of that, but I haven't
implemented it. I would really like not to have to fiddle with the
target... but I am growing resigned to the idea that a real solution
incorporates the rate shaper directly into the Xfq_codel algorithm so
as to autotune it, and perhaps reduces the number of flows under
control.
(The original version of fq_codel (Linux 3.5) suffered from the
horizontal standing queue problem, where at low speeds each additional
flow added more latency, although low-rate flows were never dropped. I
still struggle with this tradeoff.)
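To put rough numbers on that: with fair queuing, every other active
flow can hold a packet ahead of yours, so at the 864 kbps uplink Rich
reports below, each full-size packet adds on the order of 14 ms. These
are back-of-envelope figures only, not measurements:

  # serialization delay of one 1500 byte packet at 864 kbps, in ms
  echo "scale=2; 1500*8/864" | bc     # ~13.9 ms
  # five competing flows each holding one full packet ahead of you
  echo "scale=2; 5*1500*8/864" | bc   # ~69 ms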
One thing to note in Rich's graphs is that flow preservation is better
at the lower speeds - more of the measurement flows survive longer.
This is something you'll also see with nfq_codel.
I think Rich's upstream and measurement-stream results will be better
with a 20ms target on egress. They will still plot badly with -p
all_scaled; getting a decent plot would require reprocessing with the
-p totals plot type.
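Setting that just means putting 20ms into the egress latency target
field in the SQM page instead of "auto". For the reprocessing,
something along these lines should work against the data files
netperf-wrapper already saved (this is from memory, so the exact flags
may differ in the version Rich has installed):

  # sketch, from memory: re-plot a saved rrul run with the totals plot
  # instead of all_scaled; flag names may vary between versions
  netperf-wrapper -i <saved-run>.json.gz -p totals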
On Sun, Mar 2, 2014 at 1:51 PM, Aaron Wood <woody77 at gmail.com> wrote:
> Is there a writeup on each of the fq_codel variants?
>
> -Aaron
>
> Sent from my iPhone
>
>> On Mar 2, 2014, at 22:41, Dave Taht <dave.taht at gmail.com> wrote:
>>
>> Nice work!
>>
>> I have a problem in that I can't remember if target autotuning made it
>> into that release or not.
>>
>> Could you run a tc -s qdisc show dev ge00 on your favorite of the
>> above and paste the output? I still think the target is too low on
>> the egress side with the current calculation.
>>
>> Secondly, now that you have a setting you like, trying pie, codel,
>> and ns2_codel would also be interesting.
>>
>> efq_codel is currently uninteresting. It wasn't clear whether you
>> were using nfq_codel or fq_codel throughout.
>>
>>
>>> On Sun, Mar 2, 2014 at 1:18 PM, Rich Brown <richb.hanover at gmail.com> wrote:
>>> I took some time this weekend, and ran careful speed and latency tests on the CeroWrt 3.10.28-16 build. I have a much better understanding of how all this works, both in theory and in practice. Here's an executive summary of the overall test procedure with lots of details below.
>>>
>>> Adjusting CeroWrt's configured up- and download rates in the SQM page affects both the actual data transfer rates and the latency. If you set the values too low, CeroWrt will enforce that bottleneck, and the transfer rates will be lower than you could attain on your link. If you configure them too high, though, the transfer rates may look better, but the latency can go off the charts. Here's how I arrived at a good balance.
>>>
>>> Test Conditions:
>>>
>>> - Running tests from my MacBook Pro, 10.9.2.
>>> - Wi-Fi off; ethernet cable direct to Netgear WNDR3700v2 with CeroWrt 3.10.28-16.
>>> - DSL service from Fairpoint, nominally "7 Mbps down/768kbps up".
>>> - DSL Modem sync rate (the actual rate that bits enter/leave my house) is 7616kbps down; 864kbps up. The line is apparently fairly clean, too.
>>> - Base ping time to the nearest router at ISP (via traceroute) is 29-30 msec.
>>> - To minimize other traffic, I turned off most of the computers at home, and also quit my mail client (which is surprisingly chatty).
>>>
>>> The Tests:
>>>
>>> I ran two different tests: netperf-wrapper with the RRUL test, and speedtest.net. These give very different views of performance. RRUL really stresses the line using multiple simultaneous up and download streams. Speedtest.net is a consumer test that only tests one direction at a time, and for a short time. We want to look good with both.
>>>
>>> For the RRUL tests, I invoked netperf-wrapper like this: netperf-wrapper rrul -p all_scaled -l 60 -H atl.richb-hanover.com -t text-shown-in-chart
>>> For the Speedtest.net tests, I used their web GUI in the obvious way.
>>>
>>> For both tests, I used a script (pingstats.sh, see my next message) to collect the ping times and give min, max, average, median, and 10th and 90th percentile readings.
>>>
>>> Test Procedure:
>>>
>>> I ran a series of tests starting with the up/down link rates spelled out by Sebastian Moeller's amazingly detailed note last week. See https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-February/002375.html Read it carefully. There's a lot of insight available there.
>>>
>>> The initial configuration was 6089/737 down/up, with the (nearly default) values for Queue Discipline (nfq_codel, simple.qos, ECN on for ingress; NOECN for egress, auto for both ingress and egress latency targets), and ATM link layer with 44 bytes of overhead.
>>>
>>> With those initial configuration values, latency was good but the speeds were disappointing. I then re-ran the tests with CeroWrt configured for higher up/down link speeds to see where things broke.
>>>
>>> Things got better and better with increasing link rates until I hit 7600/850 - at that point, latency began to get quite large. (Of course, with SQM disabled, the latency got dreadful.)
>>>
>>> There was an anomaly at 7000/800 kbps. The 90th percentile and max numbers jumped up quite a lot, but went *down* for the next test in the sequence when I increased the upload speed to 7000/830. I ran the experiment twice to confirm that behavior.
>>>
>>> I should also note that in the course of the experiment, I re-ran many of these tests. Although I did not document each of the runs, the results (speedtest.net rates and the pingstats.sh values) were quite consistent and repeatable.
>>>
>>> Conclusion:
>>>
>>> I'm running with CeroWrt 3.10.28-16 configured for down/up 7000/830, (nearly) default Queue Discipline and ATM+44 bytes of overhead. With these configurations, latency is well in hand and my network is pretty speedy.
>>>
>>> We need to figure out how to explain to people what to expect re: the tradeoff between "faster speeds" that show up in Speedtest.net (with accompanying crappy performance) and slightly slower speeds with a *way* better experience.
>>>
>>> The data follows...
>>>
>>> Rich
>>>
>>> ================================================================
>>>
>>> RRUL Tests: The charts associated with these RRUL runs are all available at http://richb-hanover.com/rrul-tests-cerowrt-3-10-28-16/
>>>
>>> 6089/737:
>>> Min: 28.936 10pct: 29.094 Avg: 40.529 Median: 37.961 90pct: 52.636 Max: 77.171 Num pings: 77
>>>
>>> 6200/750:
>>> Min: 28.715 10pct: 29.298 Avg: 41.805 Median: 39.826 90pct: 57.414 Max: 72.363 Num pings: 77
>>>
>>> 6400/800:
>>> Min: 28.706 10pct: 29.119 Avg: 39.598 Median: 38.428 90pct: 52.351 Max: 69.492 Num pings: 78
>>>
>>> 6600/830:
>>> Min: 28.485 10pct: 29.114 Avg: 41.708 Median: 39.753 90pct: 57.552 Max: 87.328 Num pings: 77
>>>
>>> 7000/800:
>>> Min: 28.570 10pct: 29.180 Avg: 46.245 Median: 42.684 90pct: 62.376 Max: 169.991 Num pings: 77
>>> Min: 28.775 10pct: 29.226 Avg: 43.628 Median: 40.446 90pct: 60.216 Max: 121.334 Num pings: 76 (2nd run)
>>>
>>> 7000/830:
>>> Min: 28.942 10pct: 29.285 Avg: 44.283 Median: 45.318 90pct: 58.002 Max: 85.035 Num pings: 78
>>> Min: 28.951 10pct: 29.479 Avg: 43.182 Median: 41.000 90pct: 57.570 Max: 74.964 Num pings: 76 (2nd run)
>>>
>>> 7600/850:
>>> Min: 28.756 10pct: 29.078 Avg: 55.426 Median: 46.063 90pct: 81.847 Max: 277.807 Num pings: 84
>>>
>>> SQM Disabled:
>>> Min: 28.665 10pct: 29.062 Avg: 1802.521 Median: 2051.276 90pct: 2762.941 Max: 4217.644 Num pings: 78
>>>
>>> ================================================================
>>>
>>> Speedtest.net: First values are the reported down/up rates in the Speedtest GUI
>>>
>>> 6089/737:
>>> 5.00/0.58
>>> Min: 28.709 10pct: 28.935 Avg: 33.416 Median: 31.619 90pct: 38.608 Max: 49.193 Num pings: 45
>>>
>>> 6200/750:
>>> 5.08/0.58
>>> Min: 28.759 10pct: 29.055 Avg: 33.974 Median: 32.584 90pct: 41.938 Max: 46.605 Num pings: 44
>>>
>>> 6400/800:
>>> 5.24/0.60
>>> Min: 28.447 10pct: 28.826 Avg: 34.675 Median: 31.155 90pct: 41.285 Max: 81.503 Num pings: 43
>>>
>>> 6600/830:
>>> 5.41/0.65
>>> Min: 28.868 10pct: 29.053 Avg: 35.158 Median: 32.928 90pct: 44.099 Max: 51.571 Num pings: 44
>>>
>>> 7000/800:
>>> 5.73/0.62
>>> Min: 28.359 10pct: 28.841 Avg: 35.205 Median: 33.620 90pct: 43.735 Max: 54.812 Num pings: 44
>>>
>>> 7000/830:
>>> 5.74/0.65 (5.71/0.62 second run)
>>> Min: 28.605 10pct: 29.055 Avg: 34.945 Median: 31.773 90pct: 42.645 Max: 54.077 Num pings: 44
>>> Min: 28.649 10pct: 28.820 Avg: 34.866 Median: 32.398 90pct: 43.533 Max: 69.288 Num pings: 56 (2nd run)
>>>
>>> 7600/850:
>>> 6.20/0.67
>>> Min: 28.835 10pct: 28.963 Avg: 36.253 Median: 34.912 90pct: 44.659 Max: 54.023 Num pings: 48
>>>
>>> SQM Disabled:
>>> 6.46/0.73
>>> Min: 28.452 10pct: 28.872 Avg: 303.754 Median: 173.498 90pct: 499.678 Max: 1799.814 Num pings: 45
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel at lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>>
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html