[Bloat] Tuning fq_codel: are there more best practices for slow connections? (<1mbit)

Yutaka intruder_tkyf at yahoo.fr
Thu Nov 2 20:31:23 EDT 2017


Hi, Sebastian.


On Nov 3, 2017, at 05:31, Sebastian Moeller wrote:

> Hi Yutaka,
>
>
>> On Nov 2, 2017, at 17:58, Y <intruder_tkyf at yahoo.fr> wrote:
>>
>> Hi, Moeller.
>>
>> The formula for target is 1643 bytes / 810 kbps = 0.0158 s (about 16 ms).
>>
>> That includes the ATM link-layer padding.
>>
>> 16 ms plus 4 ms of headroom was my sense :P
>>
>> My connection is a 12 Mbps/1 Mbps ADSL PPPoA line,
>> and I set 7 Mbps/810 kbps to bypass the router buffer.
> 	That sounds quite extreme; on the uplink, with the proper link layer adjustments, you should be able to go up to 100% of the sync rate as reported by the modem (unless your ISP has another traffic shaper at a higher level). And going from 12 to 7 is also quite extreme, given that the ATM link layer adjustments will cost you another ~9% of bandwidth. Then again, 12/1 might be the contracted maximal rate; what are the sync rates as reported by your modem?
The sync rates are:
11872 kbps download
832 kbps upload

Why did I reduce the download from 12 to 7? Because of this page; please
see especially the download rate.
But I know that a download rate of around 11 Mbps works :)
And I will set 11 Mbps for download.
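For the record, here is the target calculation as a shell one-liner (the 1643-byte worst-case size comes from the tc stats below; the +4 ms headroom is just my own taste, not from sqm-scripts):

```shell
# Serialization delay of one worst-case packet at the shaped uplink rate;
# on a slow link, fq_codel's target should be at least this long.
WIRE_BYTES=1643   # max on-wire packet size incl. ATM expansion
UP_KBPS=810       # shaped uplink rate
awk -v b="$WIRE_BYTES" -v r="$UP_KBPS" 'BEGIN {
    ms = b * 8 / (r * 1000) * 1000   # bytes -> bits -> seconds -> ms
    printf "one-packet time: %.1f ms, suggested target: %.0f ms\n", ms, ms + 4
}'
```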
>> I changed target to 27 ms and interval to 540 ms as you say (down delay plus up delay).
> 	I could be out to lunch, but this large interval seems counter-intuitive. The idea (and please anybody correct me if I am wrong) is that interval should be long enough for both endpoints to notice a drop/ECN marking; in essence that would be the RTT of a flow (plus a small add-on to allow for some variation). In practice you will need to set one interval for all flows, and empirically 100ms works well, unless most of your flows go to more remote places, in which case setting interval to the real RTT would be better. But an interval of 540ms seems quite extreme (unless you often use connections to hosts with only satellite links). Have you tried something smaller?
I tried something smaller.
It seemed that the drop rate increased.
>> It works well now.
> 	Could you post the output of "tc -d qdisc" and "tc -s qdisc", please, so I have a better idea what your configuration currently is?
>
> Best Regards
> 	Sebastian
My dirty stats :P

[ippan at localhost ~]$ tc -d -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 
direct_packets_stat 0 ver 3.17 direct_qlen 1000
  linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
  Sent 161531280 bytes 138625 pkt (dropped 1078, overlimits 331194 
requeues 0)
  backlog 1590b 1p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 
target 5.0ms interval 100.0ms
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
   new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 
300 target 36.0ms interval 720.0ms
  Sent 151066695 bytes 99742 pkt (dropped 1078, overlimits 0 requeues 0)
  backlog 1590b 1p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 5997 ecn_mark 0
   new_flows_len 1 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 
300 target 5.0ms interval 100.0ms
  Sent 1451034 bytes 13689 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 106 drop_overlimit 0 new_flow_count 2050 ecn_mark 0
   new_flows_len 1 old_flows_len 7
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 
300 target 36.0ms interval 720.0ms
  Sent 9013551 bytes 25194 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 2004 ecn_mark 0
   new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
  Sent 59600088 bytes 149809 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 
direct_packets_stat 0 ver 3.17 direct_qlen 32
  linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
  Sent 71997532 bytes 149750 pkt (dropped 59, overlimits 42426 requeues 0)
  backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 
300 target 27.0ms interval 540.0ms ecn
  Sent 34641860 bytes 27640 pkt (dropped 1, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 1736 ecn_mark 0
   new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 
300 target 27.0ms interval 540.0ms ecn
  Sent 37355672 bytes 122110 pkt (dropped 58, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 8033 ecn_mark 0
   new_flows_len 1 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 
2 1 2 0 0 1 1 1 1 1 1 1 1
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
[ippan at localhost ~]$ tc -d -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
qdisc htb 1: dev eno1 root refcnt 2 r2q 10 default 26 
direct_packets_stat 0 ver 3.17 direct_qlen 1000
  linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
  Sent 168960078 bytes 145643 pkt (dropped 1094, overlimits 344078 
requeues 0)
  backlog 0b 0p requeues 0
qdisc fq_codel 2: dev eno1 parent 1:2 limit 300p flows 256 quantum 300 
target 5.0ms interval 100.0ms
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
   new_flows_len 0 old_flows_len 0
qdisc fq_codel 260: dev eno1 parent 1:26 limit 300p flows 256 quantum 
300 target 36.0ms interval 720.0ms
  Sent 157686660 bytes 104157 pkt (dropped 1094, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 6547 ecn_mark 0
   new_flows_len 0 old_flows_len 1
qdisc fq_codel 110: dev eno1 parent 1:10 limit 300p flows 256 quantum 
300 target 5.0ms interval 100.0ms
  Sent 1465132 bytes 13822 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 106 drop_overlimit 0 new_flow_count 2112 ecn_mark 0
   new_flows_len 0 old_flows_len 6
qdisc fq_codel 120: dev eno1 parent 1:20 limit 300p flows 256 quantum 
300 target 36.0ms interval 720.0ms
  Sent 9808286 bytes 27664 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 2280 ecn_mark 0
   new_flows_len 0 old_flows_len 1
qdisc ingress ffff: dev eno1 parent ffff:fff1 ----------------
  Sent 62426837 bytes 155632 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
qdisc htb 1: dev ifb0 root refcnt 2 r2q 10 default 26 
direct_packets_stat 0 ver 3.17 direct_qlen 32
  linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128
  Sent 75349888 bytes 155573 pkt (dropped 59, overlimits 43545 requeues 0)
  backlog 0b 0p requeues 0
qdisc fq_codel 200: dev ifb0 parent 1:20 limit 300p flows 1024 quantum 
300 target 27.0ms interval 540.0ms ecn
  Sent 37624117 bytes 30196 pkt (dropped 1, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 1967 ecn_mark 0
   new_flows_len 0 old_flows_len 1
qdisc fq_codel 260: dev ifb0 parent 1:26 limit 300p flows 1024 quantum 
300 target 27.0ms interval 540.0ms ecn
  Sent 37725771 bytes 125377 pkt (dropped 58, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
   maxpacket 1643 drop_overlimit 0 new_flow_count 8613 ecn_mark 0
   new_flows_len 0 old_flows_len 2
qdisc noqueue 0: dev virbr0 root refcnt 2
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap  1 2 2 
2 1 2 0 0 1 1 1 1 1 1 1 1
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  backlog 0b 0p requeues 0

I wrote my script according to this mailing list and the sqm-scripts.
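It looks roughly like this (a simplified sketch from memory, modeled on sqm-scripts; only the egress side with the default class is shown, and the exact numbers are illustrative):

```shell
#!/bin/sh
# Simplified sketch of an sqm-style egress shaper for a slow PPPoA uplink.
# The stab options make the kernel account for ATM cell framing (48->53
# byte expansion), matching the "linklayer atm overhead -4 mpu 64" lines
# in the stats above.
IFACE=eno1
UP_KBPS=810

tc qdisc add dev $IFACE root handle 1: \
    stab linklayer atm overhead -4 mpu 64 mtu 1478 tsize 128 \
    htb default 26

# One class shaped to (just below) the uplink sync rate.
tc class add dev $IFACE parent 1: classid 1:26 htb rate ${UP_KBPS}kbit

# fq_codel leaf: quantum 300 lets small packets (DNS/ICMP/VoIP) jump
# ahead of full-size packets; target/interval are scaled up because a
# single 1500-byte packet alone takes ~16 ms to serialize at 810 kbps.
tc qdisc add dev $IFACE parent 1:26 handle 260: fq_codel \
    limit 300 flows 256 quantum 300 target 20ms interval 200ms noecn
```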
Thanks to Sebastian and all

Maybe this works without problems.
From now on, I need to think about it more strictly.

Yutaka.
>
>> Thank you.
>>
>> Yutaka.
>>
>> On Nov 2, 2017, at 17:25, Sebastian Moeller wrote:
>>> Hi Y.
>>>
>>>
>>>> On Nov 2, 2017, at 07:42, Y <intruder_tkyf at yahoo.fr> wrote:
>>>>
>>>> hi.
>>>>
>>>> My connection is 810 kbps (<= 1 Mbps).
>>>>
>>>> This is my setting For Fq_codel,
>>>> quantum=300
>>>>
>>>> target=20ms
>>>> interval=400ms
>>>>
>>>> MTU=1478 (for PPPoA)
>>>> I cannot compare well, but latency is around 14 ms-40 ms.
>>> 	Under full saturation, in theory you would expect the average latency to equal the sum of upstream target and downstream target (which in your case would be 20 + ???). In reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measuring artifact)...
>>>
>>> Best Regards
>>>
>>>
>>>> Yutaka.
>>>>
>>>> On Nov 2, 2017, at 15:01, cloneman wrote:
>>>>> I'm trying to gather advice for people stuck on older connections. It appears that having dedicated/micromanaged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speeds.
>>>>>
>>>>> When running a single file upload at 350 kbps, I've observed the competing ICMP traffic quickly begin to drop (fq_codel) or be delayed considerably (under SFQ). From my reading, the fq_codel tuning best-practices page is not optimized for this scenario (<2.5 Mbps):
>>>>> (https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/)
>>>>>
>>>>> Of particular concern is that a no-knobs SFQ works better for me than an untuned CoDel (more delay but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
>>>>>
>>>>> I've toyed with increasing the target, and this does solve the excessive drops. I haven't played with limit and quantum all that much.
>>>>>
>>>>> My go-to solution for this would be different classes, a.k.a. traditional QoS. But wouldn't it be possible to tune fq_codel to punish the large flows 'properly' for this very low-bandwidth scenario? Surely <1 KB ICMP packets can squeeze through without being dropped if there is 350 kbps available and the competing flow is managed correctly.
>>>>>
>>>>> I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs"; it seems like I'm reinventing the wheel of fair queueing. Shouldn't the smallest flows never be delayed/dropped automatically?
>>>>>
>>>>> Lowering quantum below 1500 is confusing: what does serving a fractional packet in a time interval mean?
>>>>>
>>>>> Is there real value in tuning fq_codel for these connections or should people migrate to something else like nfq_codel?
>>>>>
>>>>> _______________________________________________
>>>>> Bloat mailing list
>>>>>
>>>>> Bloat at lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/bloat
>


