[Bloat] Questions for Bufferbloat Wikipedia article

Erik Auerswald auerswal at unix-ag.uni-kl.de
Tue Apr 6 17:59:53 EDT 2021


Hi,

On Tue, Apr 06, 2021 at 10:02:21PM +0200, Bless, Roland (TM) wrote:
> On 06.04.21 at 20:50 Erik Auerswald wrote:
> >On Tue, Apr 06, 2021 at 08:31:01AM +0200, Sebastian Moeller wrote:
> >>>On Apr 6, 2021, at 02:47, Erik Auerswald <auerswal at unix-ag.uni-kl.de> wrote:
> >>>On Mon, Apr 05, 2021 at 11:49:00PM +0200, Sebastian Moeller wrote:
> >>>>>On Apr 5, 2021, at 14:46, Rich Brown <richb.hanover at gmail.com> wrote:
> >>>>>
> >>>>>Dave Täht has put me up to revising the current Bufferbloat article
> >>>>>on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
> >>>>>[...]
> >Yes, large unmanaged buffers are at the core of the bufferbloat problem.
> 
> I disagree here: it is basically the combination
> of loss-based congestion control with unmanaged
> tail-drop buffers.

That worked for decades, then stopped working as well as before.
What changed?

Yes, there are complex interactions with how packet-switched networks
are used.  Otherwise we would probably not find ourselves in the
current situation.

To me, the prospect of having to wait minutes (yes, minutes!) for the
result of a keystroke over an SSH session is not worth the potential
throughput gain of buffers that cannot be called small.
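
As a back-of-the-envelope illustration of how a bloated FIFO turns
into minutes of delay (the numbers below are assumptions for the sake
of the example, not measurements):

  # Hypothetical: worst-case queueing delay of a full, unmanaged FIFO.
  buffer_bytes = 16 * 2**20     # assumed 16 MiB of queued data
  link_rate_bps = 1_000_000     # assumed 1 Mbit/s uplink
  delay_s = buffer_bytes * 8 / link_rate_bps
  print(f"{delay_s:.0f} s (~{delay_s / 60:.1f} minutes) of queue")
  # -> 134 s (~2.2 minutes): every keystroke waits behind all of it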

> There are at least two solutions
> to the bufferbloat problem
> 1) better congestion control algorithms
> 2) active queue management (+fq maybe)

Both approaches aim not to use all of the available buffer space when
the buffers are unreasonably large, i.e., they aim not to build a
large standing queue.
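
To make "not building a large standing queue" concrete, here is a
minimal Python sketch loosely modelled on CoDel's target/interval
idea (the constants match CoDel's defaults, but the structure is
heavily simplified and not the real algorithm):

  import time
  from collections import deque

  TARGET = 0.005     # 5 ms of standing queue is tolerated
  INTERVAL = 0.100   # sustained for 100 ms before dropping

  queue = deque()    # entries are (enqueue_timestamp, packet)
  above_since = None # when the sojourn time first exceeded TARGET

  def dequeue():
      """Deliver the next packet, dropping to drain a standing queue."""
      global above_since
      while queue:
          ts, packet = queue.popleft()
          sojourn = time.monotonic() - ts
          if sojourn <= TARGET:
              above_since = None            # queue is short again
              return packet
          if above_since is None:
              above_since = time.monotonic()
          if time.monotonic() - above_since < INTERVAL:
              return packet                 # tolerate a short burst
          # persistently above TARGET: drop and try the next packet
      return None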

> [...]
> Small buffers definitely limit the queuing delay as well as
> jitter. However, how much performance is potentially lost due to
> the small buffer depends a lot on the arrival distribution.

Could the better congestion control algorithms avoid the potential
performance loss by not requiring large buffers for high throughput?
Might small buffers incentivise senders not to send huge bursts of
data and hope for the best?
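
For a sense of scale of the small-buffer trade-off: the "Sizing
Router Buffers" line of work (Appenzeller et al.) suggests that with
n desynchronized long-lived flows a buffer of about RTT*C/sqrt(n)
suffices, instead of the classic one bandwidth-delay product.  A
quick computation with assumed numbers:

  from math import sqrt

  rtt_s = 0.1               # assumed 100 ms round-trip time
  link_rate_bps = 10e9      # assumed 10 Gbit/s link (C)
  n_flows = 10_000          # assumed concurrent long-lived flows

  bdp_bytes = rtt_s * link_rate_bps / 8
  print(f"one-BDP buffer: {bdp_bytes / 2**20:6.1f} MiB")
  print(f"RTT*C/sqrt(n):  {bdp_bytes / sqrt(n_flows) / 2**20:6.1f} MiB")
  # -> 119.2 MiB vs. 1.2 MiB for these assumptions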

FQ with AQM aims to allow the absorption of large traffic bursts (i.e.,
use of large buffers) without affecting _other_ flows too much.
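
A minimal sketch of the FQ part (flow classification by an assumed
flow_id, e.g. a hash of the 5-tuple; per-flow AQM as sketched above
would come on top of this):

  from collections import OrderedDict, deque

  flows = OrderedDict()     # flow_id -> FIFO of that flow's packets

  def enqueue(flow_id, packet):
      flows.setdefault(flow_id, deque()).append(packet)

  def dequeue():
      """Round-robin across flows: one flow's burst queues behind
      itself instead of in front of everyone else's packets."""
      while flows:
          flow_id, q = next(iter(flows.items()))
          flows.move_to_end(flow_id)    # this flow goes to the back
          if q:
              return q.popleft()
          del flows[flow_id]            # forget empty flows
      return None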

I would consider the combination of FQ+AQM, better congestion control
algorithms, and large buffers an optimization, but using just large
buffers without either of the other two approaches a mistake, one
currently called bufferbloat.  As such, I see large unmanaged buffers
at the core of the bufferbloat problem.

FQ+AQM for every large buffer may solve the bufferbloat problem by
attacking the "unmanaged" part of the problem.  Small buffers may
solve it by attacking the "large" part.  Small buffers may bring their
own share of problems, but IMHO those are much smaller than the
problems of bufferbloat.

I do not see TCP congestion control improvements, even combining
sender-side improvements with receiver-side methods as in rLEDBAT[0],
as a solution to bufferbloat, but rather as a mitigation.

[0] https://datatracker.ietf.org/doc/draft-irtf-iccrg-rledbat/
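
For reference, the delay-based controller idea behind LEDBAT
(RFC 6817), which rLEDBAT brings to the receiver side, reacts to the
measured queueing delay long before loss occurs.  A simplified sketch
of the per-ACK window update (constants as in the RFC, everything
else stripped down):

  TARGET = 0.1      # at most 100 ms of self-induced queueing delay
  GAIN = 1.0        # at most one cwnd increase per RTT, as in Reno
  MSS = 1448        # assumed maximum segment size in bytes

  def on_ack(cwnd, bytes_acked, current_delay, base_delay):
      """One LEDBAT-style update; cwnd and result are in bytes."""
      queuing_delay = current_delay - base_delay
      off_target = (TARGET - queuing_delay) / TARGET
      cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
      return max(cwnd, MSS)   # keep at least one segment in flight

This backs off as queuing_delay approaches TARGET, i.e., it mitigates
the queue it causes itself, but it cannot remove a bloated buffer
from the path.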

Anyway, I think it is obvious that I am willing to sacrifice more
throughput for better latency than others.

Thanks,
Erik
-- 
Simplicity is prerequisite for reliability.
                        -- Edsger W. Dijkstra

