From: Dave Taht
To: Rich Brown
Cc: cerowrt-devel
Date: Sun, 2 Mar 2014 17:43:17 -0800
Subject: Re: [Cerowrt-devel] Better results from CeroWrt 3.10.28-16

I wouldn't mind a wireshark packet
capture of a target 14.8ms test and a target 30ms test.

On Sun, Mar 2, 2014 at 5:41 PM, Dave Taht wrote:
> Target 30ms would be interesting. 30+% packet loss on this test (and
> your link is still operable!)
>
> At your speed range, the behavior of the tcp upload is governed by the
> initial window size more than anything else. I doubt the Mac adjusts
> that properly, so it keeps hammering away at it... my concern from
> looking at your upload graphs was that you were getting hammered so
> hard you were actually going through RTO (retransmission timeout) on
> the uploads.
>
> On Sun, Mar 2, 2014 at 5:27 PM, Rich Brown wrote:
>>
>> On Mar 2, 2014, at 4:41 PM, Dave Taht wrote:
>>
>>> Nice work!
>>>
>>> I have a problem in that I can't remember if target autotuning made it
>>> into that release or not.
>>>
>>> Could you do a "tc -s qdisc show dev ge00" on your favorite of the
>>> above and paste it? I still think target is too low on the egress
>>> side with the current calculation.
>>
>> Pasted below. The 14.8 msec is the result of the calculation that
>> comes from the current adapt_target_to_slow_link() function for
>> 830kbps.
>>
>>> Secondly, now that you have a setting you like, trying pie, codel,
>>> and ns2_codel would also be interesting.
>>
>> Need to find another available time slot, but sure.
>>
>>> efq_codel is currently uninteresting. It wasn't clear if you were
>>> using nfq_codel or fq_codel throughout.
>>
>> nfq_codel was used for all tests.
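(Aside: the 14.8 ms target is consistent with the serialization time of one full-size packet at 830 kbps, which is presumably the idea behind adapt_target_to_slow_link() -- CoDel's default 5 ms target is unreachable when a single packet takes roughly three times that long to transmit. A sketch of that arithmetic; the 1540-byte packet size and the rounding are assumptions, not the real CeroWrt code:)

```shell
#!/bin/sh
# Sketch of why target works out to 14.8 ms at 830 kbps egress.
# Assumption: on slow links the target is raised to about one
# full-size packet's serialization time, since CoDel cannot drain
# below one packet's worth of delay. 1540 bytes is an assumed size.

uplink_kbps=830
packet_bytes=1540

# transmission time of one packet, in tenths of a millisecond
# (integer math: bits * 10 / kbps == tenths of ms)
tenths=$(( packet_bytes * 8 * 10 / uplink_kbps ))
echo "serialization time ~ $(( tenths / 10 )).$(( tenths % 10 )) ms"
```

The interval of 109.8 ms in the qdisc output below looks like this target plus roughly 95 ms, though that is an inference from the numbers, not from the source.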
>>
>> Rich
>>
>> root@cerowrt:~# tc -s qdisc show dev ge00
>> qdisc htb 1: root refcnt 2 r2q 10 default 12 direct_packets_stat 0
>>  Sent 122649261 bytes 332175 pkt (dropped 0, overlimits 579184 requeues 0)
>>  backlog 0b 0p requeues 0
>> qdisc nfq_codel 110: parent 1:11 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
>>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>>  new_flows_len 0 old_flows_len 0
>> qdisc nfq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
>>  Sent 122649261 bytes 332175 pkt (dropped 129389, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 0 drop_overlimit 0 new_flow_count 37646 ecn_mark 0
>>  new_flows_len 1 old_flows_len 2
>> qdisc nfq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
>>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>>  new_flows_len 0 old_flows_len 0
>> qdisc ingress ffff: parent ffff:fff1 ----------------
>>  Sent 278307120 bytes 467944 pkt (dropped 0, overlimits 0 requeues 0)
>>  backlog 0b 0p requeues 0
>>>
>>>
>>> On Sun, Mar 2, 2014 at 1:18 PM, Rich Brown wrote:
>>>> I took some time this weekend and ran careful speed and latency
>>>> tests on the CeroWrt 3.10.28-16 build. I have a much better
>>>> understanding of how all this works, both in theory and in practice.
>>>> Here's an executive summary of the overall test procedure, with lots
>>>> of details below.
>>>>
>>>> Adjusting CeroWrt's configured up- and download rates in the SQM
>>>> page affects both the actual data transfer rates and the latency. If
>>>> you set the values too low, CeroWrt will enforce that bottleneck,
>>>> and the transfer rates will be lower than your link could attain.
>>>> If you configure them too high, though, the transfer rates may look
>>>> better, but the latency can go off the charts. Here's how I arrived
>>>> at a good balance.
>>>>
>>>> Test Conditions:
>>>>
>>>> - Running tests from my MacBook Pro, 10.9.2.
>>>> - Wi-Fi off; ethernet cable direct to Netgear WNDR3700v2 with CeroWrt 3.10.28-16.
>>>> - DSL service from Fairpoint, nominally "7 Mbps down/768kbps up".
>>>> - DSL modem sync rate (the actual rate that bits enter/leave my house) is 7616kbps down, 864kbps up. The line is apparently fairly clean, too.
>>>> - Base ping time to the nearest router at the ISP (via traceroute) is 29-30 msec.
>>>> - To minimize other traffic, I turned off most of the computers at home and also quit my mail client (which is surprisingly chatty).
>>>>
>>>> The Tests:
>>>>
>>>> I ran two different tests: netperf-wrapper with the RRUL test, and
>>>> speedtest.net. These give very different views of performance. RRUL
>>>> really stresses the line, using multiple simultaneous upload and
>>>> download streams. Speedtest.net is a consumer test that only tests
>>>> one direction at a time, and only for a short time. We want to look
>>>> good on both.
>>>>
>>>> For the RRUL tests, I invoked netperf-wrapper like this:
>>>>   netperf-wrapper rrul -p all_scaled -l 60 -H atl.richb-hanover.com -t text-shown-in-chart
>>>> For the Speedtest.net tests, I used their web GUI in the obvious way.
>>>>
>>>> For both tests, I used a script (pingstats.sh, see my next message)
>>>> to collect the ping times and give min, max, average, median, and
>>>> 10th and 90th percentile readings.
>>>>
>>>> Test Procedure:
>>>>
>>>> I ran a series of tests, starting with the up/down link rates
>>>> spelled out in Sebastian Moeller's amazingly detailed note last
>>>> week. See
>>>> https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-February/002375.html
>>>> Read it carefully. There's a lot of insight available there.
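(Aside: Rich's actual pingstats.sh follows in his next message; a minimal sketch of the same idea -- ping a host for the duration of a test, then summarize min/10th/avg/median/90th/max -- might look like this. The percentile indexing and option names here are assumptions, not Rich's script:)

```shell
#!/bin/sh
# Hypothetical pingstats.sh-style helper (Rich's real script is posted
# separately). Pings a host, extracts the RTT samples, and prints
# summary statistics in the same shape as the results below.
HOST=${1:-atl.richb-hanover.com}
COUNT=${2:-60}

ping -c "$COUNT" "$HOST" |
  sed -n 's/.*time=\([0-9.]*\).*/\1/p' |   # keep only the RTT values
  sort -n |
  awk '{ v[NR] = $1; sum += $1 }
       END {
         if (NR == 0) exit 1               # no replies at all
         printf "Min: %.3f  10pct: %.3f  Avg: %.3f  Median: %.3f  90pct: %.3f  Max: %.3f  Num pings: %d\n",
                v[1], v[int(NR * 0.1) + 1], sum / NR,
                v[int(NR * 0.5) + 1], v[int(NR * 0.9) + 1], v[NR], NR
       }'
```

Run it in a second terminal while netperf-wrapper or the Speedtest GUI is loading the link, so the samples cover the test window.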
>>>>
>>>> The initial configuration was 6089/737 down/up, with the (nearly
>>>> default) values for Queue Discipline (nfq_codel, simple.qos, ECN on
>>>> for ingress, NOECN for egress, "auto" for both the ingress and
>>>> egress latency targets), and ATM link layer with 44 bytes of
>>>> overhead.
>>>>
>>>> With those initial configuration values, latency was good but the
>>>> speeds were disappointing. I then re-ran the tests with CeroWrt
>>>> configured for higher up/down link speeds to see where things broke.
>>>>
>>>> Things got better and better with increasing link rates until I hit
>>>> 7600/850 - at that point, latency began to get quite large. (Of
>>>> course, with SQM disabled, the latency got dreadful.)
>>>>
>>>> There was an anomaly at 7000/800 kbps. The 90th percentile and max
>>>> numbers jumped up quite a lot, but went *down* for the next test in
>>>> the sequence, when I increased the upload speed to 7000/830. I ran
>>>> the experiment twice to confirm that behavior.
>>>>
>>>> I should also note that in the course of the experiment I re-ran
>>>> many of these tests. Although I did not document each of the runs,
>>>> the results (speedtest.net rates and the pingstats.sh values) were
>>>> quite consistent and repeatable.
>>>>
>>>> Conclusion:
>>>>
>>>> I'm running with CeroWrt 3.10.28-16 configured for down/up 7000/830,
>>>> a (nearly) default Queue Discipline, and ATM + 44 bytes of overhead.
>>>> With this configuration, latency is well in hand and my network is
>>>> pretty speedy.
>>>>
>>>> We need to figure out how to explain to people what to expect re:
>>>> the tradeoff between the "faster speeds" that show up in
>>>> Speedtest.net (with accompanying crappy performance) and slightly
>>>> slower speeds with a *way* better experience.
>>>>
>>>> The data follows...
>>>>
>>>> Rich
>>>>
>>>> ========================================================================
>>>>
>>>> RRUL Tests: The charts associated with these RRUL runs are all
>>>> available at http://richb-hanover.com/rrul-tests-cerowrt-3-10-28-16/
>>>>
>>>> 6089/737:
>>>> Min: 28.936  10pct: 29.094  Avg: 40.529  Median: 37.961  90pct: 52.636  Max: 77.171  Num pings: 77
>>>>
>>>> 6200/750:
>>>> Min: 28.715  10pct: 29.298  Avg: 41.805  Median: 39.826  90pct: 57.414  Max: 72.363  Num pings: 77
>>>>
>>>> 6400/800:
>>>> Min: 28.706  10pct: 29.119  Avg: 39.598  Median: 38.428  90pct: 52.351  Max: 69.492  Num pings: 78
>>>>
>>>> 6600/830:
>>>> Min: 28.485  10pct: 29.114  Avg: 41.708  Median: 39.753  90pct: 57.552  Max: 87.328  Num pings: 77
>>>>
>>>> 7000/800:
>>>> Min: 28.570  10pct: 29.180  Avg: 46.245  Median: 42.684  90pct: 62.376  Max: 169.991  Num pings: 77
>>>> Min: 28.775  10pct: 29.226  Avg: 43.628  Median: 40.446  90pct: 60.216  Max: 121.334  Num pings: 76 (2nd run)
>>>>
>>>> 7000/830:
>>>> Min: 28.942  10pct: 29.285  Avg: 44.283  Median: 45.318  90pct: 58.002  Max: 85.035  Num pings: 78
>>>> Min: 28.951  10pct: 29.479  Avg: 43.182  Median: 41.000  90pct: 57.570  Max: 74.964  Num pings: 76 (2nd run)
>>>>
>>>> 7600/850:
>>>> Min: 28.756  10pct: 29.078  Avg: 55.426  Median: 46.063  90pct: 81.847  Max: 277.807  Num pings: 84
>>>>
>>>> SQM Disabled:
>>>> Min: 28.665  10pct: 29.062  Avg: 1802.521  Median: 2051.276  90pct: 2762.941  Max: 4217.644  Num pings: 78
>>>>
>>>> ========================================================================
>>>>
>>>> Speedtest.net: First values are the reported down/up rates in the Speedtest GUI
>>>>
>>>> 6089/737:
>>>> 5.00/0.58
>>>> Min: 28.709  10pct: 28.935  Avg: 33.416  Median: 31.619  90pct: 38.608  Max: 49.193  Num pings: 45
>>>>
>>>> 6200/750:
>>>> 5.08/0.58
>>>> Min: 28.759  10pct: 29.055  Avg: 33.974  Median: 32.584  90pct: 41.938  Max: 46.605  Num pings: 44
>>>>
>>>> 6400/800:
>>>> 5.24/0.60
>>>> Min: 28.447  10pct: 28.826  Avg: 34.675  Median: 31.155  90pct: 41.285  Max: 81.503  Num pings: 43
>>>>
>>>> 6600/830:
>>>> 5.41/0.65
>>>> Min: 28.868  10pct: 29.053  Avg: 35.158  Median: 32.928  90pct: 44.099  Max: 51.571  Num pings: 44
>>>>
>>>> 7000/800:
>>>> 5.73/0.62
>>>> Min: 28.359  10pct: 28.841  Avg: 35.205  Median: 33.620  90pct: 43.735  Max: 54.812  Num pings: 44
>>>>
>>>> 7000/830:
>>>> 5.74/0.65 (5.71/0.62 second run)
>>>> Min: 28.605  10pct: 29.055  Avg: 34.945  Median: 31.773  90pct: 42.645  Max: 54.077  Num pings: 44
>>>> Min: 28.649  10pct: 28.820  Avg: 34.866  Median: 32.398  90pct: 43.533  Max: 69.288  Num pings: 56 (2nd run)
>>>>
>>>> 7600/850:
>>>> 6.20/0.67
>>>> Min: 28.835  10pct: 28.963  Avg: 36.253  Median: 34.912  90pct: 44.659  Max: 54.023  Num pings: 48
>>>>
>>>> SQM Disabled:
>>>> 6.46/0.73
>>>> Min: 28.452  10pct: 28.872  Avg: 303.754  Median: 173.498  90pct: 499.678  Max: 1799.814  Num pings: 45
>>>> _______________________________________________
>>>> Cerowrt-devel mailing list
>>>> Cerowrt-devel@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>>
>>> --
>>> Dave Täht
>>>
>>> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html