Subject: Re: [Bloat] when does the CoDel part of fq_codel help in the real world?
From: Luca Muscariello
Date: Thu, 29 Nov 2018 17:09:07 +0100
To: "Bless, Roland (TM)"
Cc: Jonathan Morton, bloat

Hi Roland,

It took me quite a lot of time to find this message in the thread...
I read the paper you sent and I guess this is the first of a series, as many things remain uncovered.

Just a quick question: why is X(t) always increasing with t?


On Tue, Nov 27, 2018 at 11:26 AM Bless, Roland (TM) <roland.bless@kit.edu> wrote:
> Hi Luca,
>
> On 27.11.18 at 10:24, Luca Muscariello wrote:
> > A congestion controlled protocol such as TCP or others, including QUIC,
> > LEDBAT and so on
> > need at least the BDP in the transmission queue to get full link
> > efficiency, i.e. the queue never empties out.
>
> This is not true. There are congestion control algorithms
> (e.g., TCP LoLa [1] or BBRv2) that can fully utilize the bottleneck link
> capacity without filling the buffer to its maximum capacity. The BDP
> rule of thumb basically stems from the older loss-based congestion
> control variants that profit from the standing queue that they built
> over time when they detect a loss:
> while they back off and stop sending, the queue keeps the bottleneck
> output busy and you'll not see underutilization of the link. Moreover,
> once you get good loss de-synchronization, the buffer size requirement
> for multiple long-lived flows decreases.
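
Just to put rough numbers on the rule of thumb and on the de-synchronization
effect: a back-of-the-envelope sketch in Python, where the link speed, RTT and
N are example values of mine, and the 1/sqrt(N) reduction is the usual figure
quoted in the buffer-sizing literature for desynchronized long-lived flows
(if I remember that literature correctly):

  # Rough buffer-sizing sketch -- example numbers only.
  from math import sqrt

  capacity_bps = 100e6   # bottleneck capacity: 100 Mbit/s (example)
  rtt_s        = 0.050   # round-trip time: 50 ms (example)
  n_flows      = 100     # long-lived, desynchronized flows (example)

  bdp_bytes = capacity_bps * rtt_s / 8
  print(f"BDP (single-flow rule of thumb): {bdp_bytes / 1e3:.0f} kB")               # ~625 kB
  print(f"With N desynchronized flows:     {bdp_bytes / sqrt(n_flows) / 1e3:.0f} kB")  # ~62 kB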

> > This gives rules of thumb to size buffers, which is also very practical
> > and, thanks to flow isolation, becomes very accurate.
>
> The positive effect of buffers is merely their role to absorb
> short-term bursts (i.e., mismatch in arrival and departure rates)
> instead of dropping packets. One does not need a big buffer to
> fully utilize a link (with perfect knowledge you can keep the link
> saturated even without a single packet waiting in the buffer).
> Furthermore, large buffers (e.g., using the BDP rule of thumb)
> are not useful/practical anymore at very high speeds such as 100 Gbit/s:
> memory is also quite costly at such high speeds...
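
And to make the 100 Gbit/s point concrete, a tiny sketch with a 50 ms
worst-case RTT picked purely as an example:

  # Why one BDP of buffer becomes painful at very high speed -- example numbers.
  capacity_bps = 100e9   # 100 Gbit/s port
  rtt_s        = 0.050   # 50 ms worst-case RTT (assumed for illustration)
  bdp_bytes = capacity_bps * rtt_s / 8
  print(f"One BDP of buffer: {bdp_bytes / 1e6:.0f} MB per port")   # 625 MB of fast memory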

> Regards,
>  Roland
>
> [1] M. Hock, F. Neumeister, M. Zitterbart, R. Bless.
> TCP LoLa: Congestion Control for Low Latencies and High Throughput.
> Local Computer Networks (LCN), 2017 IEEE 42nd Conference on, pp.
> 215-218, Singapore, Singapore, October 2017
> http://doc.tm.kit.edu/2017-LCN-lola-paper-authors-copy.pdf

> > Which is:
> >
> > 1) find a way to keep the number of backlogged flows at a reasonable value.
> > This largely depends on the minimum fair rate an application may need in
> > the long term.
> > We discussed a little bit of available mechanisms to achieve that in the
> > literature.
> >
> > 2) fix the largest RTT you want to serve at full utilization and size
> > the buffer using BDP * N_backlogged.
> > Or the other way round: check how much memory you can use
> > in the router/line card/device and for a fixed N, compute the largest
> > RTT you can serve at full utilization.
> >
> > 3) there is still some memory to dimension for sparse flows in addition
> > to that, but this is not based on BDP.
> > It is just enough to compute the total utilization of sparse flows and
> > use the same simple model Toke has used
> > to compute the (de)prioritization probability.
> >
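
A side note on my own point 2), taking the formula literally and with made-up
numbers; note that if "BDP" is instead read as the per-flow fair share, i.e.
(C/N)*RTT, the product collapses back to a single link BDP:

  # Sizing from point 2), taken literally: buffer = BDP * N_backlogged,
  # plus the inverse problem (memory budget fixed -> largest RTT served).
  # All values are examples, not measurements.
  capacity_bps = 1e9     # 1 Gbit/s bottleneck (example)
  rtt_s        = 0.050   # largest RTT to serve at full utilization (example)
  n_backlogged = 10      # backlogged flows we want to sustain (example)

  bdp_bytes    = capacity_bps * rtt_s / 8
  buffer_bytes = bdp_bytes * n_backlogged
  print(f"buffer ~ {buffer_bytes / 1e6:.1f} MB")                     # 62.5 MB

  # The other way round: memory M and N fixed -> largest RTT at full utilization.
  memory_bytes = 32e6    # 32 MB of packet memory available (example)
  rtt_max_s = memory_bytes * 8 / (capacity_bps * n_backlogged)
  print(f"largest RTT ~ {rtt_max_s * 1e3:.1f} ms")                   # 25.6 ms
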
> > This procedure would allow one to size FQ_codel but also SFQ.
> > It would be interesting to compare the two under this buffer sizing.
> > It would also be interesting to compare another mechanism that we
> > mentioned during the defense, AFD + a sparse flow queue, which is,
> > BTW, already available in Cisco Nexus switches for data centres.
> >
> > I think that the codel part would still provide the ECN feature,
> > which all the others cannot have.
> > However, the others, the last one especially, can be implemented in
> > silicon at reasonable cost.