From mboxrd@z Thu Jan 1 00:00:00 1970
References: <50453bcb-dc99-ed8e-7a9b-e00ccbcdb550@yahoo.fr> <026B80D8-2452-4E9E-A85E-4FBD6BFB25A1@gmx.de> <91DC6F64-931A-4D5F-8071-1FF6B838D814@gmx.de>
From: Yutaka
To: bloat@lists.bufferbloat.net
Date: Fri, 3 Nov 2017 09:31:23 +0900
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <91DC6F64-931A-4D5F-8071-1FF6B838D814@gmx.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
Subject: Re: [Bloat] Tuning fq_codel: are there more best practices for slow connections? (<1mbit)
List-Id: General list for discussing Bufferbloat
X-List-Received-Date: Fri, 03 Nov 2017 00:31:32 -0000

Hi, Sebastian.

On 2017/11/03 05:31, Sebastian Moeller wrote:
> Hi Yutaka,
>
>> On Nov 2, 2017, at 17:58, Y wrote:
>>
>> Hi, Moeller.
>>
>> The formula for target is 1643 bytes / 810 kbps = 0.015846836 s.
>> That includes the ATM link-layer padding.
>> So 16 ms, plus 4 ms by my own sense :P
>>
>> My connection is a 12 Mbps/1 Mbps ADSL PPPoA line,
>> and I set 7 Mbps/810 kbps to bypass the router buffer.
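(As an aside, the 0.015846836 figure appears to be one full-size, ATM-padded packet divided by the shaped uplink rate, with "810 kbps" read as 810 x 1024 bit/s. That reading is my own reconstruction, not something spelled out in the thread:

  echo "scale=9; 1643 * 8 / (810 * 1024)" | bc   # -> .015846836, i.e. ~16 ms

which is where the roughly 16 ms target mentioned above comes from.)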
> That sounds quite extreme. On the uplink, with the proper link-layer adjustments, you should be able to go up to 100% of the sync rate as reported by the modem (unless your ISP has another traffic shaper at a higher level). And going from 12 to 7 is also quite extreme, given that the ATM link-layer adjustments will cost you another 9% of bandwidth. Then again, 12/1 might be the contracted maximal rate; what are the sync rates as reported by your modem?

Link speed is 11872 kbps download and 832 kbps upload.

Why did I reduce the download from 12 to 7? Because of this page; please look especially at the download rate. But I know that a download rate of around 11 Mbps will work :) so I will set 11 Mbps for the download.

>> I changed target to 27 ms and interval to 540 ms as you say (down delay plus upload delay).

> I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please, anybody, correct me if I am wrong) is that interval should be long enough for both end points to notice a drop/ECN marking; in essence that would be the RTT of a flow (plus a small add-on to allow for some variation). In practice you will need to set one interval for all flows, and empirically 100 ms works well, unless most of your flows go to more remote places, in which case setting interval to the real RTT would be better. But an interval of 540 ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?

I tried something smaller, but I thought the drop rate was increasing.

>> It works well now.

> Could you post the output of "tc -d qdisc" and "tc -s qdisc", please, so I have a better idea what your configuration currently is?
>
> Best Regards
>    Sebastian
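Purely as an illustration of what "something smaller" could look like, the change below uses the device, class and handle from the stats further down, but the 20 ms / 200 ms values are just one plausible thing to try, not a recommendation made anywhere in this thread:

  # shrink target/interval on the existing egress default-class fq_codel
  tc qdisc change dev eno1 parent 1:26 handle 260: fq_codel \
          target 20ms interval 200ms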
My dirty stats :P

[ippan@localhost ~]$ tc -d -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 161531280 bytes 138625 pkt (dropped 1078, overlimits 331194 requeues 0)
 backlog 1590b 1p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
 Sent 151066695 bytes 99742 pkt (dropped 1078, overlimits 0 requeues 0)
 backlog 1590b 1p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 5997 ecn_mark 0
  new_flows_len 1 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
 Sent 1451034 bytes 13689 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 106 drop_overlimit 0 new_flow_count 2050 ecn_mark 0
  new_flows_len 1 old_flows_len 7
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
 Sent 9013551 bytes 25194 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 2004 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
 Sent 59600088 bytes 149809 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 71997532 bytes 149750 pkt (dropped 59, overlimits 42426 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
 Sent 34641860 bytes 27640 pkt (dropped 1, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 1736 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
 Sent 37355672 bytes 122110 pkt (dropped 58, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 8033 ecn_mark 0
  new_flows_len 1 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

[ippan@localhost ~]$ tc -d -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 1000
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 168960078 bytes 145643 pkt (dropped 1094, overlimits 344078 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
 Sent 157686660 bytes 104157 pkt (dropped 1094, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 6547 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 300 target 5.0ms interval 100.0ms
 Sent 1465132 bytes 13822 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 106 drop_overlimit 0 new_flow_count 2112 ecn_mark 0
  new_flows_len 0 old_flows_len 6
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 300 target 36.0ms interval 720.0ms
 Sent 9808286 bytes 27664 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 2280 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
 Sent 62426837 bytes 155632 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 direct_packets_stat 0 ver 3.17 direct_qlen 32
 linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
 Sent 75349888 bytes 155573 pkt (dropped 59, overlimits 43545 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
 Sent 37624117 bytes 30196 pkt (dropped 1, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 1967 ecn_mark 0
  new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 300 target 27.0ms interval 540.0ms ecn
 Sent 37725771 bytes 125377 pkt (dropped 58, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1643 drop_overlimit 0 new_flow_count 8613 ecn_mark 0
  new_flows_len 0 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

I wrote the script following this mailing list and the sqm-scripts. Thanks to Sebastian and all.
Maybe this works without problems. From now on, I need to think about it more rigorously.

Yutaka.
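For anyone wanting to reproduce a setup along these lines, here is a heavily trimmed sketch of commands that could produce stats of the shape shown above. The device names, handles, stab parameters and fq_codel knobs are copied from that output; the rates (810 kbit/s up, roughly 11 Mbit/s down) and the single default class are assumptions of mine, and the real script clearly has more classes (1:2, 1:10, 1:20), so treat this as an illustration rather than Yutaka's actual script:

  #!/bin/sh
  IFACE=eno1        # egress NIC, as in the stats above
  IFB=ifb0          # ifb device used for ingress shaping
  UP=810kbit        # shaped uplink rate mentioned earlier in the thread
  DOWN=11000kbit    # assumed downlink rate ("around 11 Mbps")
  STAB="stab linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128"

  # egress: HTB root with fq_codel on the default class
  tc qdisc replace dev $IFACE root handle 1: $STAB htb default 26
  tc class add dev $IFACE parent 1: classid 1:1 htb rate $UP
  tc class add dev $IFACE parent 1:1 classid 1:26 htb rate $UP
  tc qdisc add dev $IFACE parent 1:26 handle 260: fq_codel \
      limit 300 flows 256 quantum 300 target 36ms interval 720ms

  # ingress: redirect incoming traffic to an IFB and shape it there
  modprobe ifb numifbs=1 2>/dev/null
  ip link set $IFB up
  tc qdisc add dev $IFACE handle ffff: ingress
  tc filter add dev $IFACE parent ffff: protocol all prio 10 u32 \
      match u32 0 0 flowid 1:1 action mirred egress redirect dev $IFB
  tc qdisc replace dev $IFB root handle 1: $STAB htb default 26
  tc class add dev $IFB parent 1: classid 1:1 htb rate $DOWN
  tc class add dev $IFB parent 1:1 classid 1:26 htb rate $DOWN
  tc qdisc add dev $IFB parent 1:26 handle 260: fq_codel \
      limit 300 flows 1024 quantum 300 target 27ms interval 540ms ecn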
>
>> Thank you.
>>
>> Yutaka.
>>
>> On 2017/11/02 17:25, Sebastian Moeller wrote:
>>> Hi Y.
>>>
>>>> On Nov 2, 2017, at 07:42, Y wrote:
>>>>
>>>> hi.
>>>>
>>>> My connection is 810 kbps (<= 1 Mbps).
>>>>
>>>> This is my setting for fq_codel:
>>>> quantum=300
>>>> target=20ms
>>>> interval=400ms
>>>> MTU=1478 (for PPPoA)
>>>>
>>>> I cannot compare well, but latency is around 14 ms to 40 ms.
>>> Under full saturation, in theory, you would expect the average latency to equal the sum of the upstream target and the downstream target (which in your case would be 20 + ???); in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measuring artifact)...
>>>
>>> Best Regards
>>>
>>>> Yutaka.
>>>>
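(A quick sanity check of that rule of thumb, assuming the quoted 20 ms target applied in both directions, which is my assumption rather than something stated here: the expected added latency under saturation would be about 20 ms + 20 ms = 40 ms, and 1.5 to 2 times that is 60 ms to 80 ms, so the 14 ms to 40 ms reported above sits at or below the basic expectation.)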
>>>> On 2017/11/02 15:01, cloneman wrote:
>>>>> I'm trying to gather advice for people stuck on older connections. It appears that having dedicated/micromanaged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speeds.
>>>>>
>>>>> When running a single file upload at 350 kbps, I've observed the competing ICMP traffic quickly begin to drop (fq_codel) or be delayed considerably (under SFQ). From reading the tuning best-practices page (https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/), fq_codel is not optimized for this scenario (<2.5 Mbps).
>>>>>
>>>>> Of particular concern is that a no-knobs SFQ works better for me than an untuned CoDel (more delay, but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
>>>>>
>>>>> I've toyed with increasing the target, and this does solve the excessive drops. I haven't played with limit and quantum all that much.
>>>>>
>>>>> My go-to solution for this would be different classes, a.k.a. traditional QoS. But wouldn't it be possible to tune fq_codel to punish the large flows "properly" for this very low-bandwidth scenario? Surely <1 kB ICMP packets can squeeze through without being dropped if there is 350 kbps available, provided the competing flow is managed correctly.
>>>>>
>>>>> I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs"; it seems like I'm re-inventing the wheel of fair queuing. Shouldn't the smallest flows never be delayed/dropped automatically?
>>>>>
>>>>> Lowering quantum below 1500 is confusing: serving a fractional packet in a time interval?
>>>>>
>>>>> Is there real value in tuning fq_codel for these connections, or should people migrate to something else like nfq_codel?
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>> Bloat@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>