From: Luca Muscariello
Date: Tue, 27 Nov 2018 11:29:27 +0100
To: "Bless, Roland (TM)"
Cc: Jonathan Morton, bloat
Subject: Re: [Bloat] when does the CoDel part of fq_codel help in the real world?

I have never said that you need to fill the buffer to the max size to get full capacity, which is an absurdity.

I said you need at least the BDP so that the queue never empties out.
The link is fully utilized IFF the queue is never emptied.

On Tue 27 Nov 2018 at 11:26, Bless, Roland (TM) wrote:
> Hi Luca,
>
> Am 27.11.18 um 10:24 schrieb Luca Muscariello:
> > A congestion-controlled protocol such as TCP or others, including QUIC,
> > LEDBAT and so on,
> > needs at least the BDP in the transmission queue to get full link
> > efficiency, i.e. the queue never empties out.
>
> This is not true. There are congestion control algorithms
> (e.g., TCP LoLa [1] or BBRv2) that can fully utilize the bottleneck link
> capacity without filling the buffer to its maximum capacity. The BDP
> rule of thumb basically stems from the older loss-based congestion
> control variants that profit from the standing queue they have built
> up over time when they detect a loss:
> while they back off and stop sending, the queue keeps the bottleneck
> output busy and you will not see underutilization of the link. Moreover,
> once you get good loss de-synchronization, the buffer size requirement
> for multiple long-lived flows decreases.
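[As a back-of-envelope illustration of the two positions above: the sketch below computes the BDP and shows the classic loss-based (Reno-style) argument for why buffer >= BDP keeps the queue from emptying. All numbers and function names are illustrative, not from either mail.]

```python
def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the data in flight needed to keep
    the bottleneck busy for one RTT, expressed in bytes."""
    return rate_bps * rtt_s / 8

# Hypothetical example: 100 Mbit/s bottleneck, 50 ms RTT.
bdp = bdp_bytes(100e6, 0.050)      # 625000.0 bytes
packets = bdp / 1500               # roughly 417 full-size packets

# Classic loss-based reasoning for a single flow: at loss, the window is
# BDP + B (buffer full); it then halves to (BDP + B) / 2. The link stays
# busy during recovery only if (BDP + B) / 2 >= BDP, i.e. B >= BDP.
def min_buffer_loss_based(bdp: float) -> float:
    return bdp    # one BDP of buffer keeps the queue from emptying
```

[Algorithms such as LoLa or BBRv2, which do not rely on filling the buffer before backing off, are not bound by this lower bound, which is Roland's point.]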
>
> > This gives rules of thumb to size buffers, which is also very practical
> > and, thanks to flow isolation, becomes very accurate.
>
> The positive effect of buffers is merely their role to absorb
> short-term bursts (i.e., a mismatch in arrival and departure rates)
> instead of dropping packets. One does not need a big buffer to
> fully utilize a link (with perfect knowledge you can keep the link
> saturated even without a single packet waiting in the buffer).
> Furthermore, large buffers (e.g., using the BDP rule of thumb)
> are not useful/practical anymore at very high speeds such as 100 Gbit/s:
> memory is also quite costly at such high speeds...
>
> Regards,
>  Roland
>
> [1] M. Hock, F. Neumeister, M. Zitterbart, R. Bless.
> TCP LoLa: Congestion Control for Low Latencies and High Throughput.
> Local Computer Networks (LCN), 2017 IEEE 42nd Conference on, pp.
> 215-218, Singapore, October 2017.
> http://doc.tm.kit.edu/2017-LCN-lola-paper-authors-copy.pdf
>
> > Which is:
> >
> > 1) find a way to keep the number of backlogged flows at a reasonable value.
> > This largely depends on the minimum fair rate an application may need in
> > the long term.
> > We discussed a little bit of the mechanisms available in the literature
> > to achieve that.
> >
> > 2) fix the largest RTT you want to serve at full utilization and size
> > the buffer using BDP * N_backlogged.
> > Or the other way round: check how much memory you can use
> > in the router/line card/device and, for a fixed N, compute the largest
> > RTT you can serve at full utilization.
> >
> > 3) there is still some memory to dimension for sparse flows in addition
> > to that, but this is not based on the BDP.
> > It is just enough to compute the total utilization of sparse flows and
> > use the same simple model Toke has used
> > to compute the (de)prioritization probability.
> >
> > This procedure would allow sizing fq_codel but also SFQ.
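[Taking step 2 of the quoted procedure literally (buffer = BDP * N_backlogged), the sizing and its inversion can be sketched as follows; the function names and figures are illustrative only.]

```python
def buffer_bytes(rate_bps: float, rtt_s: float, n_backlogged: int) -> float:
    """Step 2, read literally: size the buffer as BDP * N_backlogged."""
    return (rate_bps * rtt_s / 8) * n_backlogged

def largest_rtt(memory_bytes: float, rate_bps: float, n_backlogged: int) -> float:
    """The other way round: given a memory budget and a fixed N, the
    largest RTT that can still be served at full utilization."""
    return memory_bytes * 8 / (rate_bps * n_backlogged)

# Hypothetical numbers: 1 Gbit/s link, 50 ms target RTT, 16 backlogged flows.
m = buffer_bytes(1e9, 0.050, 16)   # 100,000,000 bytes of buffer
r = largest_rtt(m, 1e9, 16)        # recovers the 0.050 s RTT
```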
> > It would be interesting to compare the two under this buffer sizing.
> > It would also be interesting to compare them with another mechanism that
> > we mentioned during the defense,
> > which is AFD + a sparse flow queue. That is, BTW, already available in
> > Cisco Nexus switches for data centres.
> >
> > I think that the CoDel part would still provide the ECN feature,
> > which all the others cannot have.
> > However the others, and the last one especially, can be implemented in
> > silicon at reasonable cost.
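[On the CoDel/ECN point: the part of CoDel that the other schemes lack is its control law, which, while the standing queue stays above the target, spaces successive ECN marks (or drops) at interval / sqrt(count). A minimal sketch of that law, with the default constants from RFC 8289; the function name is illustrative:]

```python
from math import sqrt

INTERVAL = 0.100   # 100 ms, RFC 8289 default interval
TARGET = 0.005     # 5 ms standing-queue (sojourn time) target

def next_signal_time(now_s: float, count: int) -> float:
    """While packet sojourn time stays above TARGET, CoDel marks (ECN)
    or drops, scheduling the next signal INTERVAL / sqrt(count) later,
    so signaling accelerates the longer the queue persists."""
    return now_s + INTERVAL / sqrt(count)

# Successive spacings shrink: 100 ms, ~70.7 ms, ~57.7 ms, 50 ms, ...
```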