[Bloat] when does the CoDel part of fq_codel help in the real world?
Bless, Roland (TM)
roland.bless at kit.edu
Tue Nov 27 06:43:49 EST 2018
Hi,
On 27.11.18 at 12:40, Bless, Roland (TM) wrote:
> Hi Luca,
>
> On 27.11.18 at 11:40, Luca Muscariello wrote:
>> OK. We agree.
>> That's correct, you need *at least* the BDP in flight so that the
>> bottleneck queue never empties out.
>
> No, that's not what I meant, but it's quite simple.
> You need min_inflight = 2 * RTTmin * bottleneck_rate of data in flight to fully
Sorry, it's meant to be: min_inflight = RTTmin * bottleneck_rate
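
A quick numeric sketch of that formula, with illustrative numbers
(100 Mbit/s bottleneck, 20 ms RTTmin), including what happens to any
excess in flight beyond min_inflight:

    # min_inflight = RTTmin * bottleneck_rate: one BDP "on the wire".
    bottleneck_rate = 100e6 / 8   # 100 Mbit/s, expressed in bytes/s
    rtt_min = 0.020               # 20 ms, in seconds
    min_inflight = rtt_min * bottleneck_rate   # 250,000 bytes = the BDP

    # Anything in flight beyond min_inflight sits in the bottleneck buffer:
    inflight = 300_000            # example: 300,000 bytes in flight
    standing_queue = max(0.0, inflight - min_inflight)   # 50,000 bytes queued
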
Regards,
Roland
> utilize the bottleneck link.
> If exactly this amount is in flight, the bottleneck queue stays empty.
> If your amount of inflight data is larger, the bottleneck buffer will
> store the excess packets. With just min_inflight there will be no
> bottleneck queue; the packets are "on the wire".
>
>> This can be easily proven using fluid models for any congestion
>> controlled source no matter if it is
>> loss-based, delay-based, rate-based, formula-based etc.
>>
>> A highly paced source gives you the ability to get as close to
>> BDP+epsilon as theoretically possible.
>
> Yep, but that BDP is "on the wire" and epsilon will be in the bottleneck
> buffer.
>
>> Link fully utilized is defined as Q>0, unless you don't include the
>> packet currently being transmitted. I do, so the transmitter is never
>> idle. But that's a detail.
>
> I wouldn't define link fully utilized as Q>0, but if Q>0 then
> the link is fully utilized (that's what I meant by the direction
> of implication).
>
> Regards,
> Roland
>
>>
>>
>> On Tue, Nov 27, 2018 at 11:35 AM Bless, Roland (TM)
>> <roland.bless at kit.edu> wrote:
>>
>> Hi,
>>
>> On 27.11.18 at 11:29, Luca Muscariello wrote:
>> > I have never said that you need to fill the buffer to the max size to
>> > get full capacity, which is an absurdity.
>>
>> Yes, it's absurd, but that's what today's loss-based CC algorithms do.
>>
>> > I said you need at least the BDP so that the queue never empties out.
>> > The link is fully utilized IFF the queue is never emptied.
>>
>> I was also a bit imprecise: you'll need a BDP in flight, but
>> you don't need to fill the buffer at all. Your latter sentence
>> is valid only in one direction: queue not empty -> link fully utilized.
>>
>> Regards,
>> Roland
>>
>> >
>> >
>> >
>> > On Tue 27 Nov 2018 at 11:26, Bless, Roland (TM)
>> > <roland.bless at kit.edu> wrote:
>> >
>> > Hi Luca,
>> >
>> > On 27.11.18 at 10:24, Luca Muscariello wrote:
>> > > A congestion controlled protocol such as TCP or others, including
>> > > QUIC, LEDBAT and so on, needs at least the BDP in the transmission
>> > > queue to get full link efficiency, i.e. the queue never empties out.
>> >
>> > This is not true. There are congestion control algorithms
>> > (e.g., TCP LoLa [1] or BBRv2) that can fully utilize the bottleneck
>> > link capacity without filling the buffer to its maximum capacity.
>> > The BDP rule of thumb basically stems from the older loss-based
>> > congestion control variants that profit from the standing queue they
>> > have built up by the time they detect a loss: while they back off
>> > and stop sending, the queue keeps the bottleneck output busy, so
>> > you will not see underutilization of the link. Moreover, once you
>> > get good loss de-synchronization, the buffer size requirement for
>> > multiple long-lived flows decreases.
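
For reference, the back-of-the-envelope reasoning behind that rule of
thumb, sketched in Python (the standard textbook argument, not code
from this thread):

    import math

    # Single-flow argument: a loss-based flow fills the buffer, so its
    # cwnd peaks at BDP + B; on loss it halves to (BDP + B) / 2. The
    # link stays busy during back-off iff (BDP + B) / 2 >= BDP, i.e.
    # B >= BDP -- the origin of the BDP buffer rule of thumb.
    def min_buffer_single_flow(bdp_bytes: float) -> float:
        return bdp_bytes

    # With N desynchronized long-lived flows the requirement shrinks to
    # roughly BDP / sqrt(N) (Appenzeller et al., "Sizing Router Buffers").
    def min_buffer_desynchronized(bdp_bytes: float, n_flows: int) -> float:
        return bdp_bytes / math.sqrt(n_flows)
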
>> >
>> > > This gives rules of thumb to size buffers, which is also very
>> > > practical and, thanks to flow isolation, becomes very accurate.
>> >
>> > The positive effect of buffers is merely their role of absorbing
>> > short-term bursts (i.e., a mismatch between arrival and departure
>> > rates) instead of dropping packets. One does not need a big buffer
>> > to fully utilize a link (with perfect knowledge you can keep the
>> > link saturated even without a single packet waiting in the buffer).
>> > Furthermore, large buffers (e.g., sized by the BDP rule of thumb)
>> > are no longer useful/practical at very high speeds such as
>> > 100 Gbit/s: memory is also quite costly at such high speeds...
>> >
>> > Regards,
>> > Roland
>> >
>> > [1] M. Hock, F. Neumeister, M. Zitterbart, R. Bless:
>> > TCP LoLa: Congestion Control for Low Latencies and High Throughput.
>> > 2017 IEEE 42nd Conference on Local Computer Networks (LCN),
>> > pp. 215-218, Singapore, October 2017.
>> > http://doc.tm.kit.edu/2017-LCN-lola-paper-authors-copy.pdf
>> >
>> > > Which is:
>> > >
>> > > 1) Find a way to keep the number of backlogged flows at a
>> > > reasonable value. This largely depends on the minimum fair rate
>> > > an application may need in the long term. We discussed some of
>> > > the mechanisms available in the literature to achieve that.
>> > >
>> > > 2) Fix the largest RTT you want to serve at full utilization and
>> > > size the buffer using BDP * N_backlogged. Or the other way round:
>> > > check how much memory you can use in the router/line card/device
>> > > and, for a fixed N, compute the largest RTT you can serve at full
>> > > utilization.
>> > >
>> > > 3) There is still some memory to dimension for sparse flows in
>> > > addition to that, but this is not based on BDP. It is enough to
>> > > compute the total utilization of sparse flows and use the same
>> > > simple model Toke has used to compute the (de)prioritization
>> > > probability (a sizing sketch follows this list).
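
If I read steps 1-3 correctly, the sizing could be sketched as follows;
the function and the sparse-flow allowance are illustrative placeholders,
not an established formula from the thread:

    # Hypothetical sketch of the sizing procedure in steps 1-3 above.
    def total_buffer_bytes(rate_bps: float, max_rtt_s: float,
                           n_backlogged: int,
                           sparse_allowance_bytes: float) -> float:
        bdp = rate_bps * max_rtt_s / 8       # step 2: BDP at largest RTT
        backlogged = bdp * n_backlogged      # step 2: BDP * N_backlogged
        return backlogged + sparse_allowance_bytes   # step 3: sparse flows

    # Example: 1 Gbit/s, 100 ms max RTT, 8 backlogged flows, 1 MB sparse:
    # total_buffer_bytes(1e9, 0.100, 8, 1e6) -> 101,000,000 bytes
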
>> > >
>> > > This procedure would allow one to size FQ_codel but also SFQ.
>> > > It would be interesting to compare the two under this buffer
>> > > sizing. It would also be interesting to compare another mechanism
>> > > that we mentioned during the defense: AFD + a sparse-flow queue,
>> > > which is, BTW, already available in Cisco Nexus switches for
>> > > data centres.
>> > >
>> > > I think that the CoDel part would still provide the ECN feature,
>> > > which all the others cannot have. However, the others, especially
>> > > the last one, can be implemented in silicon at reasonable cost.
>> >
>>
>