* [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
[not found] <152717340941.28154.812883711295847116.malone@soybean.canonical.com>
@ 2018-05-24 15:38 ` Jan Ceuleers
2018-06-04 11:28 ` Bless, Roland (TM)
0 siblings, 1 reply; 29+ messages in thread
From: Jan Ceuleers @ 2018-05-24 15:38 UTC (permalink / raw)
To: bloat
Took 3 years after Dave approached them, but Ubuntu is finally adopting
fq_codel as the default qdisc.
-------- Forwarded Message --------
Subject: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc
for networking
Date: Thu, 24 May 2018 14:50:09 -0000
From: Laurent Bonnaud <L.Bonnaud@laposte.net>
Reply-To: Bug 1436945 <1436945@bugs.launchpad.net>
To: jan.ceuleers@computer.org
I also see fq_codel used as default:
# cat /proc/sys/net/core/default_qdisc
fq_codel
# ip addr
[...]
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
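[Editor's sketch: for anyone who wants to check or change this on their own
machine, the knob is a sysctl; the commands below are conventional, the value
is an example, and the override only affects interfaces brought up afterwards.]

```shell
# Query the current default qdisc for newly created interfaces
sysctl net.core.default_qdisc

# Override it at runtime (requires root)
sysctl -w net.core.default_qdisc=fq_codel

# Make it persistent across reboots
echo 'net.core.default_qdisc = fq_codel' > /etc/sysctl.d/90-qdisc.conf
```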
** Changed in: linux (Ubuntu)
Status: Confirmed => Fix Released
--
You received this bug notification because you are subscribed to the bug
report.
https://bugs.launchpad.net/bugs/1436945
Title:
devel: consider fq_codel as the default qdisc for networking
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1436945/+subscriptions
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-05-24 15:38 ` [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking Jan Ceuleers
@ 2018-06-04 11:28 ` Bless, Roland (TM)
2018-06-04 13:16 ` Jonas Mårtensson
2018-06-04 23:00 ` [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking David Lang
0 siblings, 2 replies; 29+ messages in thread
From: Bless, Roland (TM) @ 2018-06-04 11:28 UTC (permalink / raw)
To: Jan Ceuleers, bloat
Hi,
Am 24.05.2018 um 17:38 schrieb Jan Ceuleers:
> Took 3 years after Dave approached them, but Ubuntu is finally adopting
> fq_codel as the default qdisc.
Yes, if the Linux kernel is forwarding packets it makes a lot of sense,
but I don't understand why it makes sense for ordinary end-systems.
Didn't Byte Queue Limits (BQL) suffice? Just curious...
Regards
Roland
> -------- Forwarded Message --------
> Subject: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc
> for networking
> Date: Thu, 24 May 2018 14:50:09 -0000
> From: Laurent Bonnaud <L.Bonnaud@laposte.net>
> Reply-To: Bug 1436945 <1436945@bugs.launchpad.net>
> To: jan.ceuleers@computer.org
>
> I also see fq_codel used as default:
>
> # cat /proc/sys/net/core/default_qdisc
> fq_codel
>
> # ip addr
> [...]
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
> UP group default qlen 1000
>
>
> ** Changed in: linux (Ubuntu)
> Status: Confirmed => Fix Released
>
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 11:28 ` Bless, Roland (TM)
@ 2018-06-04 13:16 ` Jonas Mårtensson
2018-06-04 17:08 ` Dave Taht
2018-06-04 23:00 ` [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking David Lang
1 sibling, 1 reply; 29+ messages in thread
From: Jonas Mårtensson @ 2018-06-04 13:16 UTC (permalink / raw)
To: Bless, Roland (TM); +Cc: Jan Ceuleers, bloat
On Mon, Jun 4, 2018 at 1:28 PM, Bless, Roland (TM) <roland.bless@kit.edu>
wrote:
> Hi,
>
> Am 24.05.2018 um 17:38 schrieb Jan Ceuleers:
> > Took 3 years after Dave approached them, but Ubuntu is finally adopting
> > fq_codel as the default qdisc.
>
> Yes, if the Linux kernel is forwarding packets it makes a lot of sense,
> but I don't understand why it makes sense for ordinary end-systems.
> Didn't Byte Queue Limits (BQL) suffice? Just curious...
>
Actually, for a long time now the codel wiki has recommended using sch_fq
instead of fq_codel for servers.
It seems Ubuntu defaulting to fq_codel is a result of using systemd which
adopted it as default almost 4 years ago. There was a request to change the
default in systemd from fq_codel to sch_fq in order to support pacing but
the issue was closed when TCP internal pacing was introduced in Linux 4.13:
https://github.com/systemd/systemd/issues/5090
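[Editor's sketch: for a server where sch_fq is still preferred, the qdisc can
also be set per interface with tc instead of via the sysctl default; the
interface name below is an example.]

```shell
# Replace the root qdisc on one interface with sch_fq (requires root)
tc qdisc replace dev eth0 root fq

# Verify, and inspect per-qdisc statistics
tc -s qdisc show dev eth0
```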
/Jonas
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 13:16 ` Jonas Mårtensson
@ 2018-06-04 17:08 ` Dave Taht
2018-06-04 18:22 ` Jonas Mårtensson
0 siblings, 1 reply; 29+ messages in thread
From: Dave Taht @ 2018-06-04 17:08 UTC (permalink / raw)
To: Jonas Mårtensson; +Cc: Bless, Roland (TM), bloat
On Mon, Jun 4, 2018 at 6:16 AM, Jonas Mårtensson
<martensson.jonas@gmail.com> wrote:
>
>
> On Mon, Jun 4, 2018 at 1:28 PM, Bless, Roland (TM) <roland.bless@kit.edu>
> wrote:
>>
>> Hi,
>>
>> Am 24.05.2018 um 17:38 schrieb Jan Ceuleers:
>> > Took 3 years after Dave approached them, but Ubuntu is finally adopting
>> > fq_codel as the default qdisc.
>>
>> Yes, if the Linux kernel is forwarding packets it makes a lot of sense,
>> but I don't understand why it makes sense for ordinary end-systems.
>> Didn't Byte Queue Limits (BQL) suffice? Just curious...
>
>
> Actually, for a long time now the codel wiki has recommended using sch_fq
> instead of fq_codel for servers.
That recommendation predates the TCP pacing change you note below.
I'm pretty convinced fq_codel is now the best all-around "default"
qdisc, and we should rework our recommendation.
> It seems Ubuntu defaulting to fq_codel is a result of using systemd which
> adopted it as default almost 4 years ago. There was a request to change the
> default in systemd from fq_codel to sch_fq in order to support pacing but
> the issue was closed when tcp internal pacing was introduced in linux 4.13:
>
> https://github.com/systemd/systemd/issues/5090
>
> /Jonas
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 17:08 ` Dave Taht
@ 2018-06-04 18:22 ` Jonas Mårtensson
2018-06-04 21:36 ` Jonathan Morton
2018-06-05 0:22 ` [Bloat] Fwd: " Michael Richardson
0 siblings, 2 replies; 29+ messages in thread
From: Jonas Mårtensson @ 2018-06-04 18:22 UTC (permalink / raw)
To: Dave Taht; +Cc: Bless, Roland (TM), bloat
Speaking about systemd defaults, they just enabled ecn for outgoing
connections:
https://github.com/systemd/systemd/pull/9143
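[Editor's sketch: the knob being flipped here is a sysctl; the conventional
values are 0 = off, 1 = request ECN on outgoing connections and accept it on
incoming ones, 2 = only accept it when the peer requests it (the long-time
kernel default).]

```shell
sysctl net.ipv4.tcp_ecn        # inspect the current setting
sysctl -w net.ipv4.tcp_ecn=1   # also request ECN on outgoing connections (root)
```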
/Jonas
On Mon, Jun 4, 2018 at 7:08 PM, Dave Taht <dave.taht@gmail.com> wrote:
> On Mon, Jun 4, 2018 at 6:16 AM, Jonas Mårtensson
> <martensson.jonas@gmail.com> wrote:
> >
> >
> > On Mon, Jun 4, 2018 at 1:28 PM, Bless, Roland (TM) <roland.bless@kit.edu
> >
> > wrote:
> >>
> >> Hi,
> >>
> >> Am 24.05.2018 um 17:38 schrieb Jan Ceuleers:
> >> > Took 3 years after Dave approached them, but Ubuntu is finally
> adopting
> >> > fq_codel as the default qdisc.
> >>
> >> Yes, if the Linux kernel is forwarding packets it makes a lot of sense,
> >> but I don't understand why it makes sense for ordinary end-systems.
> >> Didn't Byte Queue Limits (BQL) suffice? Just curious...
> >
> >
> > Actually, for a long time now the codel wiki has recommended using sch_fq
> > instead of fq_codel for servers.
>
> That recommendation predates the TCP pacing change you note below.
>
> I'm pretty convinced fq_codel is now the best all-around "default"
> qdisc, and we should rework our recommendation.
>
> > It seems Ubuntu defaulting to fq_codel is a result of using systemd which
> > adopted it as default almost 4 years ago. There was a request to change
> the
> > default in systemd from fq_codel to sch_fq in order to support pacing but
> > the issue was closed when tcp internal pacing was introduced in linux
> 4.13:
> >
> > https://github.com/systemd/systemd/issues/5090
> >
> > /Jonas
> >
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 18:22 ` Jonas Mårtensson
@ 2018-06-04 21:36 ` Jonathan Morton
2018-06-05 15:10 ` [Bloat] " Jonathan Foulkes
2018-06-05 0:22 ` [Bloat] Fwd: " Michael Richardson
1 sibling, 1 reply; 29+ messages in thread
From: Jonathan Morton @ 2018-06-04 21:36 UTC (permalink / raw)
To: Jonas Mårtensson; +Cc: Dave Taht, bloat
> On 4 Jun, 2018, at 9:22 pm, Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
>
> Speaking about systemd defaults, they just enabled ecn for outgoing connections:
That is also good news. With Apple *and* Ubuntu using it by default, we should finally get a critical mass of ECN traffic and any remaining blackholes fixed, making it easy for everyone else to justify turning it on as well.
- Jonathan Morton
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 11:28 ` Bless, Roland (TM)
2018-06-04 13:16 ` Jonas Mårtensson
@ 2018-06-04 23:00 ` David Lang
2018-06-05 7:44 ` Mario Hock
1 sibling, 1 reply; 29+ messages in thread
From: David Lang @ 2018-06-04 23:00 UTC (permalink / raw)
To: Bless, Roland (TM); +Cc: Jan Ceuleers, bloat
On Mon, 4 Jun 2018, Bless, Roland (TM) wrote:
> Hi,
>
> Am 24.05.2018 um 17:38 schrieb Jan Ceuleers:
>> Took 3 years after Dave approached them, but Ubuntu is finally adopting
>> fq_codel as the default qdisc.
>
> Yes, if the Linux kernel is forwarding packets it makes a lot of sense,
> but I don't understand why it makes sense for ordinary end-systems.
> Didn't Byte Queue Limits (BQL) suffice? Just curious...
No, BQL makes things much better (and makes it possible for more advanced queuing
to take place), but you can still run into problems where a bulk stream can
flood the output queue so that other traffic suffers badly.
With fq_codel, the available bandwidth is distributed in a way that ends up
being much more functional.
It turns out that prioritizing new and sparse flows significantly improves
perceived performance (no more long delays in DNS lookups before you can start
doing any real work, for example).
Without BQL you can't even see the rest of the problems, but BQL doesn't solve
everything.
David Lang
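[Editor's sketch: David's point about new and sparse flows is fq_codel's DRR++
"new flow / old flow" mechanism. The toy model below is not the kernel code;
flow names and sizes are illustrative. It shows why a lone DNS-sized packet
jumps ahead of an established bulk backlog.]

```python
from collections import deque, defaultdict

QUANTUM = 1514  # bytes of credit per scheduling round, roughly one MTU

class ToyFQ:
    """Toy model of the DRR++ scheduler used by fq_codel."""
    def __init__(self):
        self.queues = defaultdict(deque)  # flow id -> queued packet sizes
        self.new_flows = deque()          # flows that just became active
        self.old_flows = deque()          # long-running (bulk) flows
        self.deficit = defaultdict(int)   # per-flow byte credit

    def enqueue(self, flow, size):
        # A flow with no backlog that is on neither list becomes "new"
        if not self.queues[flow] and flow not in self.new_flows \
                and flow not in self.old_flows:
            self.new_flows.append(flow)
            self.deficit[flow] = QUANTUM
        self.queues[flow].append(size)

    def dequeue(self):
        while True:
            if self.new_flows:            # new flows are always served first
                lst = self.new_flows
            elif self.old_flows:
                lst = self.old_flows
            else:
                return None               # everything is idle
            flow = lst[0]
            q = self.queues[flow]
            if not q:
                # Emptied flow: a new flow gets one more pass as an old flow
                lst.popleft()
                if lst is self.new_flows:
                    self.old_flows.append(flow)
                continue
            if self.deficit[flow] < q[0]:
                # Out of credit: recharge and rotate to the old list's tail
                lst.popleft()
                self.old_flows.append(flow)
                self.deficit[flow] += QUANTUM
                continue
            self.deficit[flow] -= q[0]
            return flow, q.popleft()

fq = ToyFQ()
for _ in range(10):
    fq.enqueue("bulk", 1514)   # a saturating bulk transfer
fq.enqueue("dns", 100)         # one sparse DNS-sized packet
order = [fq.dequeue()[0] for _ in range(3)]
print(order)                   # -> ['bulk', 'dns', 'bulk']
```

The sparse packet waits behind at most one full-size packet rather than the
whole bulk backlog, which is exactly the "snappy DNS" effect described above.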
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 18:22 ` Jonas Mårtensson
2018-06-04 21:36 ` Jonathan Morton
@ 2018-06-05 0:22 ` Michael Richardson
2018-06-05 6:21 ` Jonas Mårtensson
1 sibling, 1 reply; 29+ messages in thread
From: Michael Richardson @ 2018-06-05 0:22 UTC (permalink / raw)
To: =?UTF-8?Q?Jonas_M=C3=A5rtensson?=; +Cc: Dave Taht, bloat
Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
> Speaking about systemd defaults, they just enabled ecn for outgoing
> connections:
> https://github.com/systemd/systemd/pull/9143
What about PLPMTU? Do you think they might tweak that too?
net.ipv4.tcp_mtu_probing=2
(despite name, applies to IPv6 too)
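[Editor's sketch: per the kernel's ip-sysctl documentation, this knob has
three conventional settings.]

```shell
# 0 - disabled; 1 - probe only after an ICMP black hole is detected;
# 2 - always probe, starting from a small initial MSS
sysctl -w net.ipv4.tcp_mtu_probing=2
```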
--
] Never tell me the odds! | ipv6 mesh networks [
] Michael Richardson, Sandelman Software Works | network architect [
] mcr@sandelman.ca http://www.sandelman.ca/ | ruby on rails [
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 0:22 ` [Bloat] Fwd: " Michael Richardson
@ 2018-06-05 6:21 ` Jonas Mårtensson
2018-06-06 4:14 ` Mikael Abrahamsson
2018-06-07 12:56 ` [Bloat] Fwd: [Bug 1436945] -> What other options/bufferbloat-advice ... ? Simon Iremonger (bufferbloat)
0 siblings, 2 replies; 29+ messages in thread
From: Jonas Mårtensson @ 2018-06-05 6:21 UTC (permalink / raw)
To: Michael Richardson; +Cc: Dave Taht, bloat
On Tue, Jun 5, 2018 at 2:22 AM, Michael Richardson <mcr@sandelman.ca> wrote:
>
> Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
> > Speaking about systemd defaults, they just enabled ecn for outgoing
> > connections:
>
> > https://github.com/systemd/systemd/pull/9143
>
> What about PLPMTU? Do you think they might tweak that too?
>
> net.ipv4.tcp_mtu_probing=2
> (despite name, applies to IPv6 too)
Maybe; suggest it on their GitHub. But I would rather propose
net.ipv4.tcp_mtu_probing=1 instead.
/Jonas
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 23:00 ` [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking David Lang
@ 2018-06-05 7:44 ` Mario Hock
2018-06-05 7:49 ` Jonathan Morton
0 siblings, 1 reply; 29+ messages in thread
From: Mario Hock @ 2018-06-05 7:44 UTC (permalink / raw)
To: bloat
Am 05.06.2018 um 01:00 schrieb David Lang:
> On Mon, 4 Jun 2018, Bless, Roland (TM) wrote:
>
>> Hi,
>>
>> Am 24.05.2018 um 17:38 schrieb Jan Ceuleers:
>>> Took 3 years after Dave approached them, but Ubuntu is finally adopting
>>> fq_codel as the default qdisc.
>>
>> Yes, if the Linux kernel is forwarding packets it makes a lot of sense,
>> but I don't understand why it make sense for ordinary end-systems.
>> Didn't Byte Queue Limits (BQL) suffice? Just curious...
>
> no, BQL makes things much better (and makes it possible for more advanced
> queuing to take place), but you can still run into problems where a bulk
> stream can flood the output queue so that other traffic suffers badly.
>
> with fq_codel, the available bandwidth is distributed in a way that ends
> up being much more functional.
>
> It turns out that the behavior to prioritize new and sparse connections
> significantly improves perceived performance (no more long delays in DNS
> lookups before you start doing any real work for example)
>
> without BQL, you can't even see the rest of the problems, but BQL
> doesn't solve everything.
Just to make sure that I understood your answer correctly: the benefit for
end-systems comes from the "fq" (flow queuing) part, not from the "codel"
part of fq_codel?
Mario Hock
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 7:44 ` Mario Hock
@ 2018-06-05 7:49 ` Jonathan Morton
2018-06-05 11:01 ` Mario Hock
2018-08-16 21:08 ` Dave Taht
0 siblings, 2 replies; 29+ messages in thread
From: Jonathan Morton @ 2018-06-05 7:49 UTC (permalink / raw)
To: Mario Hock; +Cc: bloat
> On 5 Jun, 2018, at 10:44 am, Mario Hock <mario.hock@kit.edu> wrote:
>
> Just to make sure that I understood your answer correctly: the benefit for end-systems comes from the "fq" (flow queuing) part, not from the "codel" part of fq_codel?
That's a fair characterisation, yes.
In fact, even for middleboxes, the "flow isolation" semantics of FQ have the most impact on reducing inter-flow induced latency. The "codel" part (AQM) helps with intra-flow latency, which is usually much less important once flow isolation is in place, but is still worth having.
- Jonathan Morton
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 7:49 ` Jonathan Morton
@ 2018-06-05 11:01 ` Mario Hock
2018-08-16 21:08 ` Dave Taht
1 sibling, 0 replies; 29+ messages in thread
From: Mario Hock @ 2018-06-05 11:01 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Am 05.06.2018 um 09:49 schrieb Jonathan Morton:
>> On 5 Jun, 2018, at 10:44 am, Mario Hock <mario.hock@kit.edu> wrote:
>>
>> Just to make sure that I understood your answer correctly: the benefit for end-systems comes from the "fq" (flow queuing) part, not from the "codel" part of fq_codel?
>
> That's a fair characterisation, yes.
>
> In fact, even for middleboxes, the "flow isolation" semantics of FQ have the most impact on reducing inter-flow induced latency. The "codel" part (AQM) helps with intra-flow latency, which is usually much less important once flow isolation is in place, but is still worth having.
Thanks for the confirmation.
A potential drawback of using the codel part (of fq_codel) on
end-systems is that it can cause packet drops already at the sender.
I could actually confirm this assumption with a very simple experiment
consisting of two servers connected over a 1 Gbit/s link and 100 parallel
flows (iperf3). With fq_codel I had 5,000-10,000 retransmissions within
60 s. With fq (or pfifo_fast) no packets were dropped. (I presume either
"TCP small queues" or backpressure keeps the queues from overflowing.)
Also, ping times (delays for short flows) were similar with fq and
fq_codel (mostly <= 1 ms).
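[Editor's sketch: a back-of-the-envelope for the scale of those
retransmissions, assuming full-MTU segments of roughly 1538 bytes on the wire;
the frame size is an assumption, not from the experiment.]

```python
# How many full-MTU packets traverse a 1 Gbit/s link in 60 s, and what
# fraction 5,000-10,000 retransmissions would represent.
WIRE_BYTES = 1538            # approximate on-the-wire size per segment
LINK_BPS = 1_000_000_000
SECONDS = 60

packets = LINK_BPS * SECONDS / (WIRE_BYTES * 8)
loss_low = 5_000 / packets * 100
loss_high = 10_000 / packets * 100
print(f"~{packets:,.0f} packets, loss {loss_low:.2f}%-{loss_high:.2f}%")
```

Under these assumptions the observed drops are on the order of 0.1-0.2% of
all packets: noticeable in the counters, but a small fraction of the traffic.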
Is there any advantage of using fq_codel over fq on end-systems?
Mario Hock
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-04 21:36 ` Jonathan Morton
@ 2018-06-05 15:10 ` Jonathan Foulkes
2018-06-05 17:24 ` Jonathan Morton
2018-06-05 18:34 ` Sebastian Moeller
0 siblings, 2 replies; 29+ messages in thread
From: Jonathan Foulkes @ 2018-06-05 15:10 UTC (permalink / raw)
To: Jonathan Morton; +Cc: bloat
Jonathan, in the past the recommendation was for NOECN on egress if capacity <4Mbps. Is that still the case in light of this?
Thanks,
Jonathan Foulkes
> On Jun 4, 2018, at 5:36 PM, Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 4 Jun, 2018, at 9:22 pm, Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
>>
>> Speaking about systemd defaults, they just enabled ecn for outgoing connections:
>
> That is also good news. With Apple *and* Ubuntu using it by default, we should finally get critical mass of ECN traffic and any remaining blackholes fixed, making it easy for everyone else to justify turning it on as well.
>
> - Jonathan Morton
>
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 15:10 ` [Bloat] " Jonathan Foulkes
@ 2018-06-05 17:24 ` Jonathan Morton
2018-06-05 18:34 ` Sebastian Moeller
1 sibling, 0 replies; 29+ messages in thread
From: Jonathan Morton @ 2018-06-05 17:24 UTC (permalink / raw)
To: Jonathan Foulkes; +Cc: bloat
> On 5 Jun, 2018, at 6:10 pm, Jonathan Foulkes <jf@jonathanfoulkes.com> wrote:
>
> Jonathan, in the past the recommendation was for NOECN on egress if capacity <4Mbps. Is that still the case in light of this?
I would always use ECN, no exceptions - unless the sender is using a TCP congestion control algorithm that doesn't support it (eg. BBR currently). That's true for both fq_codel and Cake.
With ECN, codel's action doesn't drop packets; it marks them, and the sender responds by shrinking its congestion window.
- Jonathan Morton
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 15:10 ` [Bloat] " Jonathan Foulkes
2018-06-05 17:24 ` Jonathan Morton
@ 2018-06-05 18:34 ` Sebastian Moeller
2018-06-05 19:31 ` Jonathan Morton
2018-06-06 7:44 ` [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking Bless, Roland (TM)
1 sibling, 2 replies; 29+ messages in thread
From: Sebastian Moeller @ 2018-06-05 18:34 UTC (permalink / raw)
To: Jonathan Foulkes; +Cc: Jonathan Morton, bloat
Hi Jonathan,
> On Jun 5, 2018, at 17:10, Jonathan Foulkes <jf@jonathanfoulkes.com> wrote:
>
> Jonathan, in the past the recommendation was for NOECN on egress if capacity <4Mbps. Is that still the case in light of this?
>
> Thanks,
>
> Jonathan Foulkes
>
>> On Jun 4, 2018, at 5:36 PM, Jonathan Morton <chromatix99@gmail.com> wrote:
>>
>>> On 4 Jun, 2018, at 9:22 pm, Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
>>>
>>> Speaking about systemd defaults, they just enabled ecn for outgoing connections:
>>
>> That is also good news. With Apple *and* Ubuntu using it by default, we should finally get critical mass of ECN traffic and any remaining blackholes fixed, making it easy for everyone else to justify turning it on as well.
The rationale for that decision is still valid: at low bandwidth every opportunity to send a packet matters, and every packet being transferred increases the queued packets' delay by its serialization delay. The question, IMHO, is rather whether 4 Mbps is a reasonable threshold for disabling ECN.
Here are the serialization delays for a few selected bandwidths:
1000*(1538*8)/(500*1000) = 24.61 ms
1000*(1538*8)/(1000*1000) = 12.30 ms
1000*(1538*8)/(2000*1000) = 6.15 ms
1000*(1538*8)/(4000*1000) = 3.08 ms
1000*(1538*8)/(8000*1000) = 1.54 ms
1000*(1538*8)/(10000*1000) = 1.23 ms
1000*(1538*8)/(12000*1000) = 1.03 ms
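[Editor's sketch: the table values all follow from one formula; a quick check
in code, using the same 1538-byte on-the-wire frame size.]

```python
def serialization_ms(frame_bytes: int, rate_bps: int) -> float:
    """Time to clock one frame onto the wire, in milliseconds."""
    return 1000 * frame_bytes * 8 / rate_bps

# Reproduce the table above
for kbps in (500, 1000, 2000, 4000, 8000, 10000, 12000):
    print(f"{kbps:>6} kbit/s: {serialization_ms(1538, kbps * 1000):6.2f} ms")
```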
Personally, I guess I sort of agree with the <= 4 Mbps threshold, maybe 2 Mbps, but at <= 1 Mbps the serialization delay gets painful.
In sqm-scripts we currently unconditionally default to egress ECN off, which might be too pessimistic about the usual egress bandwidths.
Best Regards
>>
>> - Jonathan Morton
>>
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 18:34 ` Sebastian Moeller
@ 2018-06-05 19:31 ` Jonathan Morton
2018-06-06 6:53 ` Sebastian Moeller
2018-08-13 22:29 ` [Bloat] ecn redux Dave Taht
2018-06-06 7:44 ` [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking Bless, Roland (TM)
1 sibling, 2 replies; 29+ messages in thread
From: Jonathan Morton @ 2018-06-05 19:31 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Jonathan Foulkes, bloat
> On 5 Jun, 2018, at 9:34 pm, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters…
Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
> …and every packet being transferred will increase the queued packets delay by its serialization delay.
This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
May I remind you that Cake never drops the last packet in a flow subqueue due to AQM action, but may still apply an ECN mark to it. That's because dropping a tail packet carries a risk of incurring an RTO before retransmission occurs, rather than "only" an RTT delay. Both RTO and RTT are always greater than the serialisation delay of a single packet.
Which is why ECN remains valuable even on very low-bandwidth links.
- Jonathan Morton
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 6:21 ` Jonas Mårtensson
@ 2018-06-06 4:14 ` Mikael Abrahamsson
2018-06-07 12:56 ` [Bloat] Fwd: [Bug 1436945] -> What other options/bufferbloat-advice ... ? Simon Iremonger (bufferbloat)
1 sibling, 0 replies; 29+ messages in thread
From: Mikael Abrahamsson @ 2018-06-06 4:14 UTC (permalink / raw)
To: Jonas Mårtensson; +Cc: Michael Richardson, bloat
On Tue, 5 Jun 2018, Jonas Mårtensson wrote:
>> What about PLPMTU? Do you think they might tweak that too?
>>
>> net.ipv4.tcp_mtu_probing=2
>> (despite name, applies to IPv6 too)
>
>
> Maybe, suggest it on their github. But I would maybe propose instead
> net.ipv4.tcp_mtu_probing=1.
MTU probing would be awesome. I am a great fan of PLPMTU, and this should be
default-on everywhere in all protocols.
--
Mikael Abrahamsson email: swmike@swm.pp.se
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 19:31 ` Jonathan Morton
@ 2018-06-06 6:53 ` Sebastian Moeller
2018-06-06 13:04 ` Jonathan Morton
2018-08-13 22:29 ` [Bloat] ecn redux Dave Taht
1 sibling, 1 reply; 29+ messages in thread
From: Sebastian Moeller @ 2018-06-06 6:53 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Jonathan Foulkes, bloat
Hi Jonathan,
> On Jun 5, 2018, at 21:31, Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 5 Jun, 2018, at 9:34 pm, Sebastian Moeller <moeller0@gmx.de> wrote:
>>
>> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters…
>
> Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
Well, look at it this way: the longer the traversal path after the Cake instance, the higher the probability that the packet gets dropped by a later hop. So on ingress we have in all likelihood already passed the main bottleneck (but beware of local WLAN quality), while on egress most of the path is still ahead of us.
>
>> …and every packet being transferred will increase the queued packets delay by its serialization delay.
>
> This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
I am not sure I am getting your point: at 0.5 Mbps every full-MTU packet will hog the line for 20+ milliseconds, so all other flows will incur at least that 20+ ms of additional latency; this is independent of the inter- or intra-flow perspective, no?
>
> May I remind you that Cake never drops the last packet in a flow subqueue due to AQM action, but may still apply an ECN mark to it.
I believe this not-dropping behavior is close to codel's?
> That's because dropping a tail packet carries a risk of incurring an RTO before retransmission occurs, rather than "only" an RTT delay. Both RTO and RTT are always greater than the serialisation delay of a single packet.
Thanks for the elaboration; clever! But dropping a packet will instantaneously free bandwidth for other flows, independent of whether the sender has already realized that fact; sure, the flow with the dropped packet will not recover from the loss as smoothly as it would with ECN signaling, but that is not the vantage point from which I am looking at the issue here.
>
> Which is why ECN remains valuable even on very low-bandwidth links.
Well, I guess I should revisit that and try to get some data at low bandwidths, but my hunch still is that
>
> - Jonathan Morton
>
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 18:34 ` Sebastian Moeller
2018-06-05 19:31 ` Jonathan Morton
@ 2018-06-06 7:44 ` Bless, Roland (TM)
2018-06-06 8:15 ` Sebastian Moeller
1 sibling, 1 reply; 29+ messages in thread
From: Bless, Roland (TM) @ 2018-06-06 7:44 UTC (permalink / raw)
To: Sebastian Moeller, Jonathan Foulkes; +Cc: Jonathan Morton, bloat
Hi,
Am 05.06.2018 um 20:34 schrieb Sebastian Moeller:
> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters and every packet being transferred will increase the queued packets delay by its serialization delay. The question IMHO is more is 4 Mbps a reasonable threshold to disable ECN or not.
ECN should be enabled irrespective of the current bottleneck bandwidth.
I don't see any relationship between serialization delay and ECN.
Congestion control is about determining the right amount of inflight
data. ECN just provides an explicit congestion signal as feedback
and helps anyway. The main problem is IMHO that most routers have
no AQM in place in order to set the CE codepoint appropriately...
Regards,
Roland
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-06 7:44 ` [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking Bless, Roland (TM)
@ 2018-06-06 8:15 ` Sebastian Moeller
2018-06-06 8:55 ` Mario Hock
0 siblings, 1 reply; 29+ messages in thread
From: Sebastian Moeller @ 2018-06-06 8:15 UTC (permalink / raw)
To: Bless, Roland (TM); +Cc: Jonathan Foulkes, Jonathan Morton, bloat
> On Jun 6, 2018, at 09:44, Bless, Roland (TM) <roland.bless@kit.edu> wrote:
>
> Hi,
>
> Am 05.06.2018 um 20:34 schrieb Sebastian Moeller:
>> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters and every packet being transferred will increase the queued packets delay by its serialization delay. The question IMHO is more is 4 Mbps a reasonable threshold to disable ECN or not.
>
> ECN should be enabled irrespective of the current bottleneck bandwidth.
> I don't see any relationship between serialization delay with ECN.
> Congestion control is about determining the right amount of inflight
> data. ECN just provides an explicit congestion signal as feedback
> and helps anyway. The main problem is IMHO that most routers have
> no AQM in place in order to set the CE codepoint appropriately...
Well, sending a packet incurs serialization delay for all queued-up packets, so not sending a packet reduces the delay for all packets that are sent by exactly that serialization delay. If egress bandwidth is precious (i.e., when it is congested and low in comparison with the amount of data that should be sent), resorting to congestion signaling by dropping seems okay to me, as that immediately frees up a "TX slot" for another flow.
Now, I do agree that for the affected flow itself ECN should be better, as signaling is going to be faster than waiting for 3 DupACKs. But as always the proof is in the data, so I will refrain from making up more hypotheses and rather try to look into acquiring data.
>
> Regards,
> Roland
>
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-06 8:15 ` Sebastian Moeller
@ 2018-06-06 8:55 ` Mario Hock
0 siblings, 0 replies; 29+ messages in thread
From: Mario Hock @ 2018-06-06 8:55 UTC (permalink / raw)
To: bloat
Am 06.06.2018 um 10:15 schrieb Sebastian Moeller:
> Well, sending a packet incurs serialization delay for all queued-up packets, so not sending a packet reduces the delay for all packets that are sent by exactly that serialization delay. If egress bandwidth is precious (i.e., when it is congested and low in comparison with the amount of data that should be sent), resorting to congestion signaling by dropping seems okay to me, as that immediately frees up a "TX slot" for another flow.
If the packet is dropped and the "TX-slot" is freed up, two things can
happen:
1. The next packet belongs to the same flow. In this case, a TCP flow
has no benefit because head-of-line blocking occurs until the packet is
retransmitted. (This might be different for loss-tolerant
latency-sensitive UDP traffic, though.)
2. The next packet belongs to another flow. Obviously, this flow would
benefit. However, the decision of which flow to serve next should
be made by the scheduler, not by the dropper. (In the case of
scheduler/dropper combinations, such as fq_codel.)
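The scheduler/dropper split can be illustrated with a toy deficit-round-robin model (a sketch of the DRR idea fq_codel builds on, not its actual implementation):

```python
from collections import deque

def drr(flows, quantum=1500):
    """Toy deficit round-robin: `flows` maps a flow name to a list of
    packet sizes; returns the order in which packets are dequeued.
    The scheduler alone decides which flow is served next -- a dropper
    bolted on top (as in fq_codel) only decides whether the packet it
    is handed survives."""
    queues = {name: deque(sizes) for name, sizes in flows.items()}
    deficits = {name: 0 for name in flows}
    order = []
    while any(queues.values()):
        for name, q in queues.items():
            if not q:
                continue
            deficits[name] += quantum
            while q and q[0] <= deficits[name]:
                deficits[name] -= q.popleft()
                order.append(name)
    return order

# A bulk flow of full-MTU packets next to a sparse flow of small packets:
print(drr({"bulk": [1500] * 4, "sparse": [100, 100]}))
# ['bulk', 'sparse', 'sparse', 'bulk', 'bulk', 'bulk']
```

Note that the sparse flow is served promptly regardless of what the dropper does to the bulk flow's packets.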
Best, Mario Hock
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-06 6:53 ` Sebastian Moeller
@ 2018-06-06 13:04 ` Jonathan Morton
2018-06-12 6:39 ` Dave Taht
0 siblings, 1 reply; 29+ messages in thread
From: Jonathan Morton @ 2018-06-06 13:04 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Jonathan Foulkes, bloat
>>> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters…
>>
>> Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
>
> Well, but look at it that way, the longer the traversal path after the cake instance the higher the probability that the packet gets dropped by a later hop.
That's only true in case Cake is not at the bottleneck, in which case it will only have a transient queue and AQM will disengage anyway. (This assumes you're using an ack-clocked protocol, which TCP is.)
>>> …and every packet being transferred will increase the queued packets delay by its serialization delay.
>>
>> This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
>
> I am not sure that I am getting your point…
Evidently. You've been following Cake development for how long, now? This is basic stuff.
> …at 0.5Mbps every full-MTU packet will hog the line for 20+ milliseconds, so all other flows will incur at least that 20+ ms additional latency; this is independent of inter- or intra-flow perspective, no?
At the point where the AQM drop decision is made, Cake (and fq_codel) has already decided which flow to service. On a bulk flow, most packets are the same size (a full MTU), and even if the packet delivered is the last one presently in the queue, probably another one will arrive by the time it is next serviced - so the effect of the *flow's* presence remains even into the foreseeable future.
So there is no effect on other flows' latency, only subsequent packets in the same flow - and the flow is always hurt by dropping packets, rather than marking them.
- Jonathan Morton
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] Fwd: [Bug 1436945] -> What other options/bufferbloat-advice ... ?
2018-06-05 6:21 ` Jonas Mårtensson
2018-06-06 4:14 ` Mikael Abrahamsson
@ 2018-06-07 12:56 ` Simon Iremonger (bufferbloat)
1 sibling, 0 replies; 29+ messages in thread
From: Simon Iremonger (bufferbloat) @ 2018-06-07 12:56 UTC (permalink / raw)
To: bloat
>> What about PLPMTU? Do you think they might tweak that too?
>> net.ipv4.tcp_mtu_probing=2
>> (despite name, applies to IPv6 too)
>
> Maybe, suggest it on their github. But I would maybe propose instead
> net.ipv4.tcp_mtu_probing=1.
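For reference, the setting under discussion, with the value meanings as documented in the kernel's ip-sysctl text (the sysctl.d file name below is just an example):

```shell
# net.ipv4.tcp_mtu_probing (applies to IPv6 too, despite the name):
#   0 - disabled (the kernel default)
#   1 - enabled only after an ICMP black hole is detected
#   2 - always enabled, starting from tcp_base_mss
sysctl net.ipv4.tcp_mtu_probing              # show the current value
sudo sysctl -w net.ipv4.tcp_mtu_probing=1    # the value proposed above
# Persist across reboots:
echo 'net.ipv4.tcp_mtu_probing = 1' | sudo tee /etc/sysctl.d/90-plpmtu.conf
```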
OK, this needs to become *organized* now...
What about putting explanations of the above into the
bufferbloat-wiki?
https://github.com/tohojo/bufferbloat-net/
What are the risks?
What are the advantages?
Are there other flags worth changing?
Can somebody who knows more help with checking on the state of BBR
congestion control?
Which of these changes depend on particular Linux versions, and which
MUST NOT be applied without a particular kernel version?
etc...
Can somebody help make pull requests archiving "old" areas
of the bufferbloat wiki, e.g. old changes that aren't relevant
any more given Linux 3.2+ and so forth...?
https://github.com/tohojo/bufferbloat-net/
--Simon
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-06 13:04 ` Jonathan Morton
@ 2018-06-12 6:39 ` Dave Taht
2018-06-12 6:47 ` Dave Taht
0 siblings, 1 reply; 29+ messages in thread
From: Dave Taht @ 2018-06-12 6:39 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Sebastian Moeller, bloat
"So there is no effect on other flows' latency, only subsequent
packets in the same flow - and the flow is always hurt by dropping
packets, rather than marking them."
Disagree. The flow being dropped from will reduce its rate in an rtt,
reducing the latency impact on other flows.
I regard an ideal queue length as 1 packet or aggregate, as "showing"
all flows the closest thing to the real path rtt. You want to store
packets in the path, not buffers.
ecn has mass. It is trivial to demonstrate an ecn marked flow starving
out a non-ecn flow, at low rates.
On Wed, Jun 6, 2018 at 6:04 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
>>>> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters…
>>>
>>> Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
>>
>> Well, but look at it that way, the longer the traversal path after the cake instance the higher the probability that the packet gets dropped by a later hop.
>
> That's only true in case Cake is not at the bottleneck, in which case it will only have a transient queue and AQM will disengage anyway. (This assumes you're using an ack-clocked protocol, which TCP is.)
>
>>>> …and every packet being transferred will increase the queued packets delay by its serialization delay.
>>>
>>> This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
>>
>> I am not sure that I am getting your point…
>
> Evidently. You've been following Cake development for how long, now? This is basic stuff.
>
>> …at 0.5Mbps every full-MTU packet will hog the line for 20+ milliseconds, so all other flows will incur at least that 20+ ms additional latency; this is independent of inter- or intra-flow perspective, no?
>
> At the point where the AQM drop decision is made, Cake (and fq_codel) has already decided which flow to service. On a bulk flow, most packets are the same size (a full MTU), and even if the packet delivered is the last one presently in the queue, probably another one will arrive by the time it is next serviced - so the effect of the *flow's* presence remains even into the foreseeable future.
>
> So there is no effect on other flows' latency, only subsequent packets in the same flow - and the flow is always hurt by dropping packets, rather than marking them.
>
> - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-12 6:39 ` Dave Taht
@ 2018-06-12 6:47 ` Dave Taht
2018-08-11 19:17 ` Dave Taht
0 siblings, 1 reply; 29+ messages in thread
From: Dave Taht @ 2018-06-12 6:47 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Sebastian Moeller, bloat
As for the tail loss/RTO problem: it doesn't happen unless we are
already in a drop state for a queue, and it doesn't happen very often;
when it does, it seems like a good idea to me to so thoroughly back
off in the face of so much congestion.
fq_codel originally never dropped the last packet in the queue, which
led to a worst case latency of 1024 * mtu at the bandwidth. That got
fixed and I'm happy with the result. I honestly don't know what cake
does anymore except that jonathan rarely tests at real rtts where the
amount of data in the pipe is a lot more than what's sane to have
queued, where I almost always have realistic path delays.
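The 1024 * MTU worst case mentioned above is easy to quantify; a small sketch assuming one full-MTU packet queued in each of fq_codel's 1024 default flow queues:

```python
QUEUES = 1024   # fq_codel's default number of flow queues
MTU = 1500      # bytes per full-MTU packet

def worst_case_ms(rate_bps):
    """Delay if one full-MTU packet sits in every flow queue ahead of you."""
    return QUEUES * MTU * 8 * 1000 / rate_bps

print(round(worst_case_ms(100e6), 1))   # ~122.9 ms at 100 Mbps
print(round(worst_case_ms(1e9), 2))     # ~12.29 ms at 1 Gbps
```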
It would be good to resolve this debate in some direction one day,
perhaps by measuring utilization > 0 on a wide range of tests.
On Mon, Jun 11, 2018 at 11:39 PM, Dave Taht <dave.taht@gmail.com> wrote:
> "So there is no effect on other flows' latency, only subsequent
> packets in the same flow - and the flow is always hurt by dropping
> packets, rather than marking them."
>
> Disagree. The flow being dropped from will reduce its rate in an rtt,
> reducing the latency impact on other flows.
>
> I regard an ideal queue length as 1 packet or aggregate, as "showing"
> all flows the closest thing to the real path rtt. You want to store
> packets in the path, not buffers.
>
> ecn has mass. It is trivial to demonstrate an ecn marked flow starving
> out a non-ecn flow, at low rates.
>
> On Wed, Jun 6, 2018 at 6:04 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
>>>>> The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters…
>>>>
>>>> Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
>>>
>>> Well, but look at it that way, the longer the traversal path after the cake instance the higher the probability that the packet gets dropped by a later hop.
>>
>> That's only true in case Cake is not at the bottleneck, in which case it will only have a transient queue and AQM will disengage anyway. (This assumes you're using an ack-clocked protocol, which TCP is.)
>>
>>>>> …and every packet being transferred will increase the queued packets delay by its serialization delay.
>>>>
>>>> This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
>>>
>>> I am not sure that I am getting your point…
>>
>> Evidently. You've been following Cake development for how long, now? This is basic stuff.
>>
>>> …at 0.5Mbps every full-MTU packet will hog the line for 20+ milliseconds, so all other flows will incur at least that 20+ ms additional latency; this is independent of inter- or intra-flow perspective, no?
>>
>> At the point where the AQM drop decision is made, Cake (and fq_codel) has already decided which flow to service. On a bulk flow, most packets are the same size (a full MTU), and even if the packet delivered is the last one presently in the queue, probably another one will arrive by the time it is next serviced - so the effect of the *flow's* presence remains even into the foreseeable future.
>>
>> So there is no effect on other flows' latency, only subsequent packets in the same flow - and the flow is always hurt by dropping packets, rather than marking them.
>>
>> - Jonathan Morton
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
>
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-12 6:47 ` Dave Taht
@ 2018-08-11 19:17 ` Dave Taht
0 siblings, 0 replies; 29+ messages in thread
From: Dave Taht @ 2018-08-11 19:17 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Sebastian Moeller, bloat
In revisiting this old thread, in light of this,
https://github.com/systemd/systemd/issues/9725
and my test results of cake with and without ecn under big loads... I
feel as though I'm becoming a
pariah in favor of queue length management, by dropping packets! In
bufferbloat.net! Cake used to drop ecn-marked packets at overload; I'm
seeing enormous differences in queue depth with and without ecn. (On one
test at 100mbit, 10ms queues vs 30ms.) More details later.
Now, some of this is that cubic tcp is just way too aggressive and
perhaps some mods to it have arrived in the last 5 years that make it
even worse... so I'm going to go do a bit of testing with osx's
implementation
in particular. The ecn responses laid out in the original rfc were
against reno, a sawtooth, against iw2, and I also think that cwnd is
not decreasing enough nowadays in the first place.
^ permalink raw reply [flat|nested] 29+ messages in thread
* [Bloat] ecn redux
2018-06-05 19:31 ` Jonathan Morton
2018-06-06 6:53 ` Sebastian Moeller
@ 2018-08-13 22:29 ` Dave Taht
1 sibling, 0 replies; 29+ messages in thread
From: Dave Taht @ 2018-08-13 22:29 UTC (permalink / raw)
To: Jonathan Morton; +Cc: Sebastian Moeller, bloat, Cake List
On Tue, Jun 5, 2018 at 12:31 PM Jonathan Morton <chromatix99@gmail.com> wrote:
>
> > On 5 Jun, 2018, at 9:34 pm, Sebastian Moeller <moeller0@gmx.de> wrote:
> >
> > The rationale for that decision still is valid, at low bandwidth every opportunity to send a packet matters…
>
> Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
>
> > …and every packet being transferred will increase the queued packets delay by its serialization delay.
>
> This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
>
> May I remind you that Cake never drops the last packet in a flow subqueue due to AQM action, but may still apply an ECN mark to it. That's because dropping a tail packet carries a risk of incurring an RTO before retransmission occurs, rather than "only" an RTT delay. Both RTO and RTT are always greater than the serialisation delay of a single packet.
>
> Which is why ECN remains valuable even on very low-bandwidth links.
I guess everybody knows at this point that I'm not a big fan of ecn.
I'd done a bit of work on making
"drop and mark" work in earlier versions of cake and completely missed
that it had got ripped out until a month or two back.
I'd like to point at this bit of codel, where it can, does, and will
do bulk dropping, and increase the drop schedule, even drop the last
packet in that queue, while overloaded, in an attempt to get things
back to the real rtt.
https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git/tree/include/net/codel_impl.h#n176
Years ago, even on simple traffic, codel would spend 30% of its time
here. On the kinds of massive overloads
for the path Roland has done, it wouldn't surprise me if it was > 90%.
With ecn'd traffic, in this bit of code, we do not drain the excess
packets, nor do we increase the mark rate as frantically. I've always
felt this was a major flaw in codel's ecn handling, and have tried to
fix it in various ways.
Even pie starts dropping ecn, when the drop probability exceeds 10%.
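For context, the control law behind the code linked above can be sketched as follows; this is a simplification of CoDel's drop scheduling, not the kernel implementation:

```python
import math

INTERVAL = 0.100  # CoDel's default interval, 100 ms
TARGET = 0.005    # target sojourn time, 5 ms

def next_drop_delay(count):
    """While the sojourn time stays above TARGET, CoDel schedules each
    successive drop interval/sqrt(count) after the previous one, so a
    persistent overload ramps the drop (or mark) rate up quickly."""
    return INTERVAL / math.sqrt(count)

# Spacing (in ms) between successive drops as overload persists:
print([round(next_drop_delay(c) * 1000, 1) for c in (1, 2, 4, 16, 100)])
# [100.0, 70.7, 50.0, 25.0, 10.0]
```

The complaint above is that with ECN the packets that would have been dropped on this schedule stay in the queue, so the backlog is not drained even as the mark rate rises.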
> - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
2018-06-05 7:49 ` Jonathan Morton
2018-06-05 11:01 ` Mario Hock
@ 2018-08-16 21:08 ` Dave Taht
1 sibling, 0 replies; 29+ messages in thread
From: Dave Taht @ 2018-08-16 21:08 UTC (permalink / raw)
To: Jonathan Morton; +Cc: mario.hock, bloat
On Tue, Jun 5, 2018 at 12:49 AM Jonathan Morton <chromatix99@gmail.com> wrote:
>
> > On 5 Jun, 2018, at 10:44 am, Mario Hock <mario.hock@kit.edu> wrote:
> >
> > Just to make sure that I got your answer correctly. The benefit for endsystems comes from the "fq" (flow queuing) part, not from the "codel" part of fq_codel?
>
> That's a fair characterisation, yes.
>
> In fact, even for middleboxes, the "flow isolation" semantics of FQ have the most impact on reducing inter-flow induced latency. The "codel" part (AQM) helps with intra-flow latency, which is usually much less important once flow isolation is in place, but is still worth having.
So, jonathan, this portion of the debate leaked over into
https://github.com/systemd/systemd/issues/9725
And I lost a great deal of hair over it. The codel portion is way
worth it on "end-systems".
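For anyone wanting to check both parts on an end system, a hedged sketch using standard iproute2/sysctl commands (eth0 is an example interface name):

```shell
# Confirm fq_codel is the default qdisc (as in the Ubuntu bug above):
sysctl net.core.default_qdisc
# Attach it explicitly to an interface:
sudo tc qdisc replace dev eth0 root fq_codel
# Inspect both the "fq" (per-flow) and "codel" (AQM) statistics:
tc -s qdisc show dev eth0
```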
>
> - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking
[not found] <mailman.3.1527177601.17575.bloat@lists.bufferbloat.net>
@ 2018-05-24 16:31 ` Rich Brown
0 siblings, 0 replies; 29+ messages in thread
From: Rich Brown @ 2018-05-24 16:31 UTC (permalink / raw)
To: bloat
[-- Attachment #1: Type: text/plain, Size: 568 bytes --]
> On May 24, 2018, at 12:00 PM, bloat-request@lists.bufferbloat.net wrote:
>
> Took 3 years after Dave approached them, but Ubuntu is finally adopting
> fq_codel as the default qdisc.
And I was sorry to have missed the SIXTH anniversary of fq_codel on Monday, 14 May 2018.
I have always been aware that engineering projects have inertia, but it's fascinating to see how that inertia plays out in the real world, even when the technology has such a potential to benefit everyone.
Happy Sixth Birthday! Take a moment to pat yourselves on the back.
Rich
[-- Attachment #2: Type: text/html, Size: 2395 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread, other threads:[~2018-08-16 21:08 UTC | newest]
Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <152717340941.28154.812883711295847116.malone@soybean.canonical.com>
2018-05-24 15:38 ` [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking Jan Ceuleers
2018-06-04 11:28 ` Bless, Roland (TM)
2018-06-04 13:16 ` Jonas Mårtensson
2018-06-04 17:08 ` Dave Taht
2018-06-04 18:22 ` Jonas Mårtensson
2018-06-04 21:36 ` Jonathan Morton
2018-06-05 15:10 ` [Bloat] " Jonathan Foulkes
2018-06-05 17:24 ` Jonathan Morton
2018-06-05 18:34 ` Sebastian Moeller
2018-06-05 19:31 ` Jonathan Morton
2018-06-06 6:53 ` Sebastian Moeller
2018-06-06 13:04 ` Jonathan Morton
2018-06-12 6:39 ` Dave Taht
2018-06-12 6:47 ` Dave Taht
2018-08-11 19:17 ` Dave Taht
2018-08-13 22:29 ` [Bloat] ecn redux Dave Taht
2018-06-06 7:44 ` [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking Bless, Roland (TM)
2018-06-06 8:15 ` Sebastian Moeller
2018-06-06 8:55 ` Mario Hock
2018-06-05 0:22 ` [Bloat] Fwd: " Michael Richardson
2018-06-05 6:21 ` Jonas Mårtensson
2018-06-06 4:14 ` Mikael Abrahamsson
2018-06-07 12:56 ` [Bloat] Fwd: [Bug 1436945] -> What other options/bufferbloat-advice ... ? Simon Iremonger (bufferbloat)
2018-06-04 23:00 ` [Bloat] Fwd: [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking David Lang
2018-06-05 7:44 ` Mario Hock
2018-06-05 7:49 ` Jonathan Morton
2018-06-05 11:01 ` Mario Hock
2018-08-16 21:08 ` Dave Taht
[not found] <mailman.3.1527177601.17575.bloat@lists.bufferbloat.net>
2018-05-24 16:31 ` [Bloat] " Rich Brown
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox