From: Sebastian Moeller
Date: Fri, 3 Nov 2017 10:53:01 +0100
To: Yutaka
Cc: bloat@lists.bufferbloat.net
Message-Id: <0853E916-8B45-45C5-9598-A816504D7BFD@gmx.de>
Subject: Re: [Bloat] Tuning fq_codel: are there more best practices for slow connections? (<1mbit)

Hi Yutaka,

> On Nov 3, 2017, at 01:31, Yutaka wrote:
>
> Hi, Sebastian.
>
> On 2017-11-03 05:31, Sebastian Moeller wrote:
>
>> Hi Yutaka,
>>
>>> On Nov 2, 2017, at 17:58, Y wrote:
>>>
>>> Hi, Moeller.
>>>
>>> Formula of target is 1643 bytes / 810 kbps = 0.015846836.
>>>
>>> It includes the ATM link-layer padding.
>>>
>>> 16 ms plus 4 ms is my sense of it :P
>>>
>>> My connection is a 12 Mbps / 1 Mbps ADSL PPPoA line,
>>> and I set 7 Mbps / 810 kbps to bypass the router buffer.
>> That sounds quite extreme. On the uplink, with the proper link layer adjustments, you should be able to go up to 100% of the sync rate as reported by the modem (unless your ISP has another traffic shaper at a higher level). And going from 12 to 7 is also quite extreme, given that the ATM link layer adjustments will cost you another 9% of bandwidth. Then again, 12/1 might be the contracted maximal rate; what are the sync rates as reported by your modem?
> Link speed is
> 11872 kbps download
> 832 kbps upload

Thanks. With proper link layer adjustments I would aim for 11872 * 0.9 = 10684.8 and 832 * 0.995 = 827.84; downstream shaping is a bit approximate (even though there is a feature in cake's development branch that promises to make it less approximate), so I would go to 90 or 85% of the downstream sync bandwidth. As you know, Linux shapers (with the proper overhead specified) shape gross bandwidth, so due to ATM's 48/53 encoding the measurable goodput will be around 9% lower than one might expect:

10685 * (48/53) * ((1478 - 2 - 20 - 20) / (1478 + 10)) = 9338.8
828 * (48/53) * ((1478 - 2 - 20 - 20) / (1478 + 10)) = 723.7

This actually excludes the typical HTTP part of your web-based speedtest, but that should be in the noise. I realize what you did with the MTU/MSS ((1478 + 10) / 48 = 31, so for full-sized packets you have no ATM/AAL5 cell padding) - clever; I never bothered to go to this level of detail, so respect!

>
> Why I reduced download from 12 to 7: because according to this page - please see especially the download rate.

Which page?

> But I know that I can get around an 11 Mbps download rate to work :)
> And I will set 11 Mbps for download.

As stated above, I would aim for something in the range of 10500 kbit/s initially and then test.

>>> I changed target to 27 ms and interval to 540 ms as you say (down delay plus upload delay).
>> I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please anybody correct me if I am wrong) is that interval should be long enough for both endpoints to notice a drop/ECN mark; in essence that would be the RTT of a flow (plus a small add-on to allow for some variation). In practice you need to set one interval for all flows, and empirically 100 ms works well, unless most of your flows go to more remote places, in which case setting interval to the real RTT would be better. But an interval of 540 ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?
> I tried something smaller.
> I thought that the dropping rate was increasing.

My mental model for interval is that this is the reaction time you are willing to give a flow's endpoints before you drop more aggressively; if set too high, you might be trading a higher latency under load for a bit more bandwidth (which is a valid trade-off as long as you make it consciously ;) ).

>>> It works well now.
>> Could you post the output of "tc -d qdisc" and "tc -s qdisc", please, so I have a better idea what your configuration currently is?
>>
>> Best Regards
>> 	Sebastian

> My dirty stat :P
>
> [ippan@localhost ~]$ tc -d -s qdisc
> qdisc noqueue 0: dev lo root refcnt 2
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
> linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128

So you are shaping on an ethernet device (eno1) but adjusting for a PPPoA VC/Mux (RFC 2364) link: since the kernel adds 14 bytes for ethernet interfaces, you specify -4 to get the desired IP + 10 (per-packet overhead in bytes: PPP 2, ATM AAL5 SAR 8, total 10). But both MPU and MTU seem wrong to me.

For tc stab the tcMTU parameter really does not need to match the real MTU; it just needs to be larger than the largest packet size you expect to encounter, so we default to 2047 since that is larger than the 48/53-expanded packet size. Together with tsize, tcMTU is used to create the look-up table the kernel uses to map from real packet size to estimated on-the-wire packet size. The default 2047, 128 makes a table that increments in units of 16 bytes (as (2047+1)/128 = 16), which correctly deals with the 48-byte quantisation that linklayer atm creates (48 = 3*16); your values, (1478+1)/128 = 11.5546875, will give a somewhat odd granularity. And yes, the tc stab thing is somewhat opaque.

Finally, mpu 64 is correct for any ethernet-based transport (or rather any transport that uses full L2 ethernet frames including the frame check sequence), but most ATM links a) do not use the FCS (and hence are not bound to ethernet's 64-byte minimum) and b) your link does not use ethernet framing at all (as you can see from your overhead, which is smaller than the ethernet srcmac, dstmac and ethertype alone).

So I would set tcMPU to 0, tcMTU to 2047 and leave tsize at 128. Or I would give cake a trial (it needs to be used in combination with a patched tc utility); cake can do its own overhead accounting, which is way simpler than tc stab (it should also be slightly more efficient and will deal with all possible packet sizes).
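To make that concrete, here is a minimal sketch of what I have in mind (not a drop-in script: the rates are just the 90% / 99.5% estimates from above, the handle and default class numbers are copied from your output, and your existing classes and filters would stay exactly as they are):

  # egress root on eno1: same htb tree as now, only the stab parameters change
  tc qdisc replace dev eno1 root handle 1: stab mtu 2047 tsize 128 mpu 0 \
      overhead -4 linklayer atm htb default 26

  # ingress shaper on ifb0: same change
  tc qdisc replace dev ifb0 root handle 1: stab mtu 2047 tsize 128 mpu 0 \
      overhead -4 linklayer atm htb default 26

  # or, with a cake-patched tc, let cake do the overhead accounting itself;
  # the pppoa-vcmux keyword (equivalent to overhead 10 plus ATM cell framing)
  # depends on your cake version, so please check the man page of the tc you built
  tc qdisc replace dev eno1 root cake bandwidth 828kbit pppoa-vcmux
  tc qdisc replace dev ifb0 root cake bandwidth 10500kbit pppoa-vcmux

With the stab variant your HTB class rates stay in charge of the shaping; with cake you would drop the HTB/fq_codel tree entirely and let cake handle both shaping and flow queueing.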
> Sent 161531280 bytes 138625 pkt (dropped 1078, overlimits 331194 requeues 0)
> backlog 1590b 1p requeues 0
> qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> new_flows_len 0 old_flows_len 0
> qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
> Sent 151066695 bytes 99742 pkt (dropped 1078, overlimits 0 requeues 0)
> backlog 1590b 1p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 5997 ecn_mark 0
> new_flows_len 1 old_flows_len 1
> qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
> Sent 1451034 bytes 13689 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 106 drop_overlimit 0 new_flow_count 2050 ecn_mark 0
> new_flows_len 1 old_flows_len 7
> qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
> Sent 9013551 bytes 25194 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 2004 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
> Sent 59600088 bytes 149809 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
> linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
> Sent 71997532 bytes 149750 pkt (dropped 59, overlimits 42426 requeues 0)
> backlog 0b 0p requeues 0
> qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
> Sent 34641860 bytes 27640 pkt (dropped 1, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 1736 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
> Sent 37355672 bytes 122110 pkt (dropped 58, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 8033 ecn_mark 0
> new_flows_len 1 old_flows_len 2
> qdisc noqueue 0: dev virbr0 root refcnt 2
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> [ippan@localhost ~]$ tc -d -s qdisc
> qdisc noqueue 0: dev lo root refcnt 2
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
> linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128

Same comments apply as above.
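One more small thing: when comparing two snapshots like these, it is really the deltas in the drop, overlimit and backlog counters during a load test that are informative, not the absolute numbers accumulated since boot. Something like the following (plain iproute2 plus watch, nothing exotic) makes that easier to follow while a speedtest is running:

  # refresh the egress (eno1) and ingress (ifb0) counters once per second
  watch -n 1 'tc -s qdisc show dev eno1; tc -s qdisc show dev ifb0'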
> Sent 168960078 bytes 145643 pkt (dropped 1094, overlimits 344078 requeues 0)
> backlog 0b 0p requeues 0
> qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> new_flows_len 0 old_flows_len 0
> qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
> Sent 157686660 bytes 104157 pkt (dropped 1094, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 6547 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
> Sent 1465132 bytes 13822 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 106 drop_overlimit 0 new_flow_count 2112 ecn_mark 0
> new_flows_len 0 old_flows_len 6
> qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
> Sent 9808286 bytes 27664 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 2280 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
> Sent 62426837 bytes 155632 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
> linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
> Sent 75349888 bytes 155573 pkt (dropped 59, overlimits 43545 requeues 0)
> backlog 0b 0p requeues 0
> qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
> Sent 37624117 bytes 30196 pkt (dropped 1, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 1967 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
> Sent 37725771 bytes 125377 pkt (dropped 58, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1643 drop_overlimit 0 new_flow_count 8613 ecn_mark 0
> new_flows_len 0 old_flows_len 2
> qdisc noqueue 0: dev virbr0 root refcnt 2
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>
> I wrote the script according to this mailing list and sqm-scripts.

Would you be willing to share your script?

Best Regards
	Sebastian

> Thanks to Sebastian and all
>
> Maybe this works without problem.
> From now on, I need strict thinking.
>
> Yutaka.
>>
>>> Thank you.
>>>
>>> Yutaka.
>>>
>>> On 2017-11-02 17:25, Sebastian Moeller wrote:
>>>> Hi Y.
>>>>
>>>>> On Nov 2, 2017, at 07:42, Y wrote:
>>>>>
>>>>> hi.
>>>>>
>>>>> My connection is 810 kbps (<= 1 Mbps).
>>>>>
>>>>> This is my setting for fq_codel:
>>>>> quantum=300
>>>>> target=20ms
>>>>> interval=400ms
>>>>> MTU=1478 (for PPPoA)
>>>>>
>>>>> I cannot compare well, but latency is around 14-40 ms.
>>>> Under full saturation, in theory you would expect the average latency to equal the sum of the upstream target and the downstream target (which in your case would be 20 + ???); in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measuring artifact)...
>>>>
>>>> Best Regards
>>>>
>>>>> Yutaka.
>>>>>
>>>>> On 2017-11-02 15:01, cloneman wrote:
>>>>>> I'm trying to gather advice for people stuck on older connections. It appears that having dedicated / micromanaged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speed.
>>>>>>
>>>>>> When running a single file upload @350kbps, I've observed the competing ICMP traffic quickly begin to drop (fq_codel) or be delayed considerably (under sfq). From reading the tuning best practices page (https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/), fq_codel is not optimized for this scenario (<2.5 Mbps).
>>>>>>
>>>>>> Of particular concern is that a no-knobs SFQ works better for me than an untuned codel (more delay but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
>>>>>>
>>>>>> I've toyed with increasing the target and this does solve the excessive drops. I haven't played with limit and quantum all that much.
>>>>>>
>>>>>> My go-to solution for this would be different classes, a.k.a. traditional QoS. But wouldn't it be possible to tune fq_codel to punish the large flows 'properly' for this very low bandwidth scenario? Surely <1 kB ICMP packets can squeeze through without being dropped if there is 350 kbps available, provided the competing flow is managed correctly.
>>>>>>
>>>>>> I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs"; it seems like I'm re-inventing the wheel of fair queuing - shouldn't the smallest flows never be delayed/dropped automatically?
>>>>>>
>>>>>> Lowering quantum below 1500 is confusing - serving a fractional packet in a time interval?
>>>>>>
>>>>>> Is there real value in tuning fq_codel for these connections or should people migrate to something else like nfq_codel?
>>>>>>
>>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat