From: Dave Taht
Date: Sun, 2 Mar 2014 17:41:54 -0800
To: Rich Brown
Cc: cerowrt-devel
Subject: Re: [Cerowrt-devel] Better results from CeroWrt 3.10.28-16

Target 30ms would be interesting.

30+% packet loss on this test (and your link is still operable!)

At your speed range, the behavior of the TCP upload is governed by the
initial window size more than anything else. I doubt the Mac adjusts
that properly, so it keeps hammering away at it... my concern from
looking at your upload graphs was that you were getting hit so hard
that you were actually going through RTO (retransmission timeout) on
the uploads.

On Sun, Mar 2, 2014 at 5:27 PM, Rich Brown wrote:
>
> On Mar 2, 2014, at 4:41 PM, Dave Taht wrote:
>
>> Nice work!
>>
>> I have a problem in that I can't remember if target autotuning made it
>> into that release or not.
>>
>> Could you do a "tc -s qdisc show dev ge00" on your favorite of the
>> above and paste? I still think target is too low on the egress side
>> with the current calculation.
>
> Pasted below. The 14.8 msec is the result of the calculation that comes
> from the current adapt_target_to_slow_link() function for 830 kbps.
>
>> Secondly, now that you have a setting you like, trying pie, codel, and
>> ns2_codel also would be interesting.
>
> Need to find another available time slot, but sure.
>
>> efq_codel is currently uninteresting. It wasn't clear if you were using
>> nfq_codel or fq_codel throughout.
>
> nfq_codel was used for all tests.
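>
> (As a sanity check on that 14.8 msec figure -- a minimal sketch of the
> arithmetic, assuming the function simply scales target up to the
> transmit time of one full MTU packet plus ~40 bytes of per-packet
> overhead at the configured egress rate; that's my reading of its
> intent, not the shipped code:)
>
>   # back-of-the-envelope, not the actual adapt_target_to_slow_link()
>   echo "scale=1; (1500+40)*8 / 830" | bc    # -> 14.8 (ms)
>
> which matches the "target 14.8ms" in the qdisc dump below.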
>
> Rich
>
> root@cerowrt:~# tc -s qdisc show dev ge00
> qdisc htb 1: root refcnt 2 r2q 10 default 12 direct_packets_stat 0
>  Sent 122649261 bytes 332175 pkt (dropped 0, overlimits 579184 requeues 0)
>  backlog 0b 0p requeues 0
> qdisc nfq_codel 110: parent 1:11 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> qdisc nfq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
>  Sent 122649261 bytes 332175 pkt (dropped 129389, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 37646 ecn_mark 0
>  new_flows_len 1 old_flows_len 2
> qdisc nfq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 14.8ms interval 109.8ms
>  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> qdisc ingress ffff: parent ffff:fff1 ----------------
>  Sent 278307120 bytes 467944 pkt (dropped 0, overlimits 0 requeues 0)
>  backlog 0b 0p requeues 0
>>
>> On Sun, Mar 2, 2014 at 1:18 PM, Rich Brown wrote:
>>> I took some time this weekend and ran careful speed and latency tests
>>> on the CeroWrt 3.10.28-16 build. I have a much better understanding of
>>> how all this works, both in theory and in practice. Here's an
>>> executive summary of the overall test procedure, with lots of details
>>> below.
>>>
>>> Adjusting CeroWrt's configured up- and download rates on the SQM page
>>> affects both the actual data transfer rates and the latency. If you
>>> set the values too low, CeroWrt will enforce that bottleneck, and the
>>> transfer rates will be lower than you could attain on your link. If
>>> you configure them too high, though, the transfer rates may look
>>> better, but the latency can go off the charts. Here's how I arrived at
>>> a good balance.
>>>
>>> Test Conditions:
>>>
>>> - Running tests from my MacBook Pro, 10.9.2.
>>> - Wi-Fi off; ethernet cable direct to the Netgear WNDR3700v2 with
>>>   CeroWrt 3.10.28-16.
>>> - DSL service from Fairpoint, nominally "7 Mbps down/768 kbps up".
>>> - DSL modem sync rate (the actual rate that bits enter/leave my house)
>>>   is 7616 kbps down, 864 kbps up. The line is apparently fairly clean,
>>>   too.
>>> - Base ping time to the nearest router at the ISP (via traceroute) is
>>>   29-30 msec.
>>> - To minimize other traffic, I turned off most of the computers at
>>>   home and also quit my mail client (which is surprisingly chatty).
>>>
>>> The Tests:
>>>
>>> I ran two different tests: netperf-wrapper with the RRUL test, and
>>> speedtest.net. These give very different views of performance. RRUL
>>> really stresses the line using multiple simultaneous up- and download
>>> streams. Speedtest.net is a consumer test that tests only one
>>> direction at a time, and only for a short time. We want to look good
>>> with both.
>>>
>>> For the RRUL tests, I invoked netperf-wrapper like this:
>>>   netperf-wrapper rrul -p all_scaled -l 60 -H atl.richb-hanover.com -t text-shown-in-chart
>>>
>>> For the Speedtest.net tests, I used their web GUI in the obvious way.
>>>
>>> For both tests, I used a script (pingstats.sh, see my next message) to
>>> collect the ping times and report min, max, average, median, and 10th
>>> and 90th percentile readings.
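>>>
>>> (Roughly along these lines -- a minimal sketch, not the exact script;
>>> the target host and ping count here are placeholders:)
>>>
>>>   #!/bin/sh
>>>   # pingstats.sh (sketch): ping a host, then summarize the RTTs
>>>   ping -c 60 "${1:-atl.richb-hanover.com}" |
>>>   sed -n 's/.*time=\([0-9.]*\).*/\1/p' | sort -n |
>>>   awk '{ rtt[NR] = $1; sum += $1 }
>>>        END { print "Min:", rtt[1], "10pct:", rtt[int(NR*0.1)+1],
>>>                    "Avg:", sum/NR, "Median:", rtt[int(NR*0.5)+1],
>>>                    "90pct:", rtt[int(NR*0.9)+1], "Max:", rtt[NR],
>>>                    "Num pings:", NR }'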
>>>
>>> Test Procedure:
>>>
>>> I ran a series of tests, starting with the up/down link rates spelled
>>> out in Sebastian Moeller's amazingly detailed note last week. See
>>> https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-February/002375.html
>>> and read it carefully - there's a lot of insight available there.
>>>
>>> The initial configuration was 6089/737 down/up, with the (nearly
>>> default) values for Queue Discipline (nfq_codel, simple.qos, ECN on
>>> for ingress, NOECN for egress, auto for both ingress and egress
>>> latency targets), and ATM link layer with 44 bytes of overhead.
>>>
>>> With those initial configuration values, latency was good but the
>>> speeds were disappointing. I then re-ran the tests with CeroWrt
>>> configured for higher up/down link speeds to see where things broke.
>>>
>>> Things got better and better with increasing link rates until I hit
>>> 7600/850 - at that point, latency began to get quite large. (Of
>>> course, with SQM disabled, the latency got dreadful.)
>>>
>>> There was an anomaly at 7000/800 kbps. The 90th percentile and max
>>> numbers jumped up quite a lot, but went *down* for the next test in
>>> the sequence, when I increased the upload speed to 7000/830. I ran the
>>> experiment twice to confirm that behavior.
>>>
>>> I should also note that in the course of the experiment I re-ran many
>>> of these tests. Although I did not document each of the runs, the
>>> results (speedtest.net rates and the pingstats.sh values) were quite
>>> consistent and repeatable.
>>>
>>> Conclusion:
>>>
>>> I'm running CeroWrt 3.10.28-16 configured for down/up 7000/830, with
>>> the (nearly) default Queue Discipline and ATM + 44 bytes of overhead.
>>> With this configuration, latency is well in hand and my network is
>>> pretty speedy. (A quick sanity check on those numbers appears just
>>> after this summary.)
>>>
>>> We need to figure out how to explain to people the tradeoff between
>>> the "faster speeds" that show up in Speedtest.net (with accompanying
>>> crappy performance) and slightly slower speeds with a *way* better
>>> experience.
>>>
>>> The data follows...
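>>>
>>> (First, the promised sanity check -- a back-of-the-envelope sketch,
>>> assuming SQM's ATM link-layer option already accounts for the
>>> 48-payload-in-53-byte cell framing, so the shaper rates only need to
>>> sit a little below the 7616/864 modem sync rates:)
>>>
>>>   # fraction of sync rate used by the chosen 7000/830 shaper settings
>>>   echo "scale=3; 7000/7616; 830/864" | bc    # -> .919 and .960
>>>
>>> (i.e. roughly 8% of headroom downstream and 4% upstream, enough to
>>> keep the queue in CeroWrt rather than in the modem.)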
>>>
>>> Rich
>>>
>>> =================================================================
>>>
>>> RRUL Tests: The charts associated with these RRUL runs are all
>>> available at http://richb-hanover.com/rrul-tests-cerowrt-3-10-28-16/
>>>
>>> 6089/737:
>>> Min: 28.936  10pct: 29.094  Avg: 40.529  Median: 37.961  90pct: 52.636  Max: 77.171  Num pings: 77
>>>
>>> 6200/750:
>>> Min: 28.715  10pct: 29.298  Avg: 41.805  Median: 39.826  90pct: 57.414  Max: 72.363  Num pings: 77
>>>
>>> 6400/800:
>>> Min: 28.706  10pct: 29.119  Avg: 39.598  Median: 38.428  90pct: 52.351  Max: 69.492  Num pings: 78
>>>
>>> 6600/830:
>>> Min: 28.485  10pct: 29.114  Avg: 41.708  Median: 39.753  90pct: 57.552  Max: 87.328  Num pings: 77
>>>
>>> 7000/800:
>>> Min: 28.570  10pct: 29.180  Avg: 46.245  Median: 42.684  90pct: 62.376  Max: 169.991  Num pings: 77
>>> Min: 28.775  10pct: 29.226  Avg: 43.628  Median: 40.446  90pct: 60.216  Max: 121.334  Num pings: 76  (2nd run)
>>>
>>> 7000/830:
>>> Min: 28.942  10pct: 29.285  Avg: 44.283  Median: 45.318  90pct: 58.002  Max: 85.035  Num pings: 78
>>> Min: 28.951  10pct: 29.479  Avg: 43.182  Median: 41.000  90pct: 57.570  Max: 74.964  Num pings: 76  (2nd run)
>>>
>>> 7600/850:
>>> Min: 28.756  10pct: 29.078  Avg: 55.426  Median: 46.063  90pct: 81.847  Max: 277.807  Num pings: 84
>>>
>>> SQM Disabled:
>>> Min: 28.665  10pct: 29.062  Avg: 1802.521  Median: 2051.276  90pct: 2762.941  Max: 4217.644  Num pings: 78
>>>
>>> =================================================================
>>>
>>> Speedtest.net: First values are the reported down/up rates in the
>>> Speedtest GUI.
>>>
>>> 6089/737:
>>> 5.00/0.58
>>> Min: 28.709  10pct: 28.935  Avg: 33.416  Median: 31.619  90pct: 38.608  Max: 49.193  Num pings: 45
>>>
>>> 6200/750:
>>> 5.08/0.58
>>> Min: 28.759  10pct: 29.055  Avg: 33.974  Median: 32.584  90pct: 41.938  Max: 46.605  Num pings: 44
>>>
>>> 6400/800:
>>> 5.24/0.60
>>> Min: 28.447  10pct: 28.826  Avg: 34.675  Median: 31.155  90pct: 41.285  Max: 81.503  Num pings: 43
>>>
>>> 6600/830:
>>> 5.41/0.65
>>> Min: 28.868  10pct: 29.053  Avg: 35.158  Median: 32.928  90pct: 44.099  Max: 51.571  Num pings: 44
>>>
>>> 7000/800:
>>> 5.73/0.62
>>> Min: 28.359  10pct: 28.841  Avg: 35.205  Median: 33.620  90pct: 43.735  Max: 54.812  Num pings: 44
>>>
>>> 7000/830:
>>> 5.74/0.65 (5.71/0.62 second run)
>>> Min: 28.605  10pct: 29.055  Avg: 34.945  Median: 31.773  90pct: 42.645  Max: 54.077  Num pings: 44
>>> Min: 28.649  10pct: 28.820  Avg: 34.866  Median: 32.398  90pct: 43.533  Max: 69.288  Num pings: 56  (2nd run)
>>>
>>> 7600/850:
>>> 6.20/0.67
>>> Min: 28.835  10pct: 28.963  Avg: 36.253  Median: 34.912  90pct: 44.659  Max: 54.023  Num pings: 48
>>>
>>> SQM Disabled:
>>> 6.46/0.73
>>> Min: 28.452  10pct: 28.872  Avg: 303.754  Median: 173.498  90pct: 499.678  Max: 1799.814  Num pings: 45
>>>
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html