From: Y <intruder_tkyf@yahoo.fr>
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Tuning fq_codel: are there more best practices for slow connections? (<1mbit)
Date: Fri, 3 Nov 2017 01:53:50 +0900
Message-ID: <e632c581-ede3-2ea0-e8cb-8870fda6693d@yahoo.fr>
In-Reply-To: <a67b5f5c-539b-502a-76bc-e57df2dc8458@pollere.com>

Hi, Kathleen.

The formula for target is: 1643 bytes × 8 / (810 × 1024 bit/s) ≈ 0.0158 s, i.e. about 15.8 ms.

The 1643 bytes already include the ATM link-layer padding.
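
As a minimal sketch of that calculation (the ~10 bytes of PPPoA/AAL5 overhead below is my assumption; it is what makes a 1478-byte packet come out to 31 ATM cells, i.e. 1643 bytes on the wire):

    import math

    MTU = 1478             # PPPoA MTU from this thread
    OVERHEAD = 10          # assumed VC-Mux PPP + AAL5 trailer overhead, bytes
    ATM_PAYLOAD = 48       # payload bytes carried per ATM cell
    ATM_CELL = 53          # total bytes per ATM cell on the wire
    RATE_BPS = 810 * 1024  # 810 kbps read as 810 * 1024 bit/s

    # AAL5 pads each frame up to a whole number of 48-byte cell payloads;
    # every cell then costs 53 bytes of line time.
    cells = math.ceil((MTU + OVERHEAD) / ATM_PAYLOAD)   # -> 31
    wire_bytes = cells * ATM_CELL                       # -> 1643

    target = wire_bytes * 8 / RATE_BPS                  # -> ~0.01585 s
    print(cells, wire_bytes, round(target * 1000, 2), "ms")

So one full-size packet occupies the line for roughly 16 ms, which is why a target in the 16-20 ms range makes sense here.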




On Nov 3, 2017, at 01:33, Kathleen Nichols wrote:
> On 11/2/17 1:25 AM, Sebastian Moeller wrote:
>> Hi Y.
>>
>>
>>> On Nov 2, 2017, at 07:42, Y <intruder_tkyf@yahoo.fr> wrote:
>>>
>>> hi.
>>>
>>> My connection is 810 kbps (<= 1 Mbps).
>>>
>>> These are my settings for fq_codel:
>>> quantum=300
>>>
>>> target=20ms
>>> interval=400ms
>>>
>>> MTU=1478 (for PPPoA)
>>> I cannot compare well, but latency is around 14-40 ms.
>> 	Under full saturation, in theory you would expect the average latency to equal the sum of the upstream target and downstream target (which in your case would be 20 + ???); in reality I often see something like 1.5 to 2 times the expected value (but I have never inquired any deeper, so that might be a measuring artifact)...
> An MTU packet would cause 14.6 ms of delay. To cause a codel drop, you'd
> need a queue of more than one packet to hang around for 400 ms. I
> would suspect that if you looked at the dynamics of the delay you'd see
> it going up and down, probably averaging to something less than two
> packet times. Delay vs. time is probably going to be oscillatory.
>
> Is the unloaded RTT on the order of 200-300 ms?
When I do a speedtest upload while pinging 8.8.8.8, the ping RTT is around 30-80 ms. The average is around 40-50 ms. The delay never goes over 100 ms.

> Delay vs. time is probably going to be oscillatory.

Yes :)
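
To put numbers on that oscillation, here is a small sketch (using the 1643-byte on-wire size derived above) of the per-packet serialization delay, which is the step size the queueing delay moves in:

    # Each full-size packet monopolizes an 810 kbit/s link for ~15 ms,
    # so queueing delay rises and falls in steps of roughly that size
    # instead of settling at a constant value.
    RATE_BPS = 810_000  # decimal reading of 810 kbps, as in Kathleen's 14.6 ms

    for name, size_bytes in [("MTU (1478 B)", 1478), ("ATM wire (1643 B)", 1643)]:
        delay_ms = size_bytes * 8 / RATE_BPS * 1000
        print(f"{name}: {delay_ms:.1f} ms per packet")  # -> 14.6 and 16.2 ms

    # A standing queue of n packets adds roughly n * 15 ms, so 30-80 ms
    # ping RTTs leave room for only a few queued packets above the base RTT.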

>> Best Regards
>>
>>
>>> Yutaka.
>>>
>>>> On Nov 2, 2017, at 15:01, cloneman wrote:
>>>> I'm trying to gather advice for people stuck on older connections. It appears that having dedicated, micromanaged tc classes greatly outperforms the "no knobs" fq_codel approach for connections with slow upload speeds.
>>>>
>>>> When running a single file upload at 350 kbps, I've observed competing ICMP traffic quickly begin to be dropped (under fq_codel) or delayed considerably (under sfq). From my reading, the fq_codel tuning best-practices page is not optimized for this scenario (<2.5 Mbps):
>>>> https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/
>>>>
>>>> Of particular concern is that no-knobs SFQ works better for me than untuned fq_codel (more delay, but much less loss for small flows). People just flipping the fq_codel button on their router at these low speeds could be doing themselves a disservice.
>>>>
>>>> I've toyed with increasing the target, and this does solve the excessive drops. I haven't played with limit and quantum all that much.
>>>>
>>>> My go-to solution for this would be different classes, a.k.a. traditional QoS. But wouldn't it be possible to tune fq_codel to punish the large flows 'properly' for this very low-bandwidth scenario? Surely <1 KB ICMP packets can squeeze through without being dropped when 350 kbps is available, if the competing flow is managed correctly.
>>>>
>>>> I could create a class filter by packet length, thereby moving ICMP/VoIP to its own tc class, but this goes against "no knobs" and it seems like I'm re-inventing the wheel of fair queueing - shouldn't the smallest flows never be delayed/dropped automatically?
>>>>
>>>> Lowering quantum below 1500 is confusing - serving a fractional packet in a time interval?
>>>>
>>>> Is there real value in tuning fq_codel for these connections, or should people migrate to something else like nfq_codel?
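
Coming back to cloneman's question about tuning for these rates: one possible heuristic (a sketch only, similar in spirit to what SQM-style scripts do at low bandwidth, not an official recipe) is to scale target up to about 1.5 packet transmission times and keep interval roughly an order of magnitude above it:

    def fq_codel_params(rate_kbps, mtu=1518,
                        min_target_ms=5.0, min_interval_ms=100.0):
        # Time to serialize one MTU-sized packet at the given rate, in ms.
        pkt_ms = mtu * 8 / rate_kbps
        # target: at least ~1.5 packet times, never below the 5 ms default.
        target_ms = max(min_target_ms, 1.5 * pkt_ms)
        # interval: keep it well above target so flows have a chance to
        # react before CoDel starts dropping.
        interval_ms = max(min_interval_ms, 10.0 * target_ms)
        # A small quantum lets small packets (ICMP, VoIP) be served sooner,
        # echoing the quantum=300 used earlier in this thread.
        quantum = 300 if rate_kbps < 2500 else 1514
        return round(target_ms, 1), round(interval_ms, 1), quantum

    print(fq_codel_params(810, mtu=1643))  # -> (24.3, 243.4, 300)

Those numbers would translate into something like "tc qdisc replace dev ppp0 root fq_codel quantum 300 target 24ms interval 240ms" (the device name is hypothetical). Whether that beats hand-built tc classes at 350 kbps is exactly the open question of this thread.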


