General list for discussing Bufferbloat
From: "Bless, Roland (TM)" <roland.bless@kit.edu>
To: Sebastian Moeller <moeller0@gmx.de>
Cc: Erik Auerswald <auerswal@unix-ag.uni-kl.de>, bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Questions for Bufferbloat Wikipedia article
Date: Wed, 7 Apr 2021 12:39:16 +0200	[thread overview]
Message-ID: <a95ac801-cc37-7ccf-db27-f0638da92c74@kit.edu> (raw)
In-Reply-To: <2C969F6A-660D-4806-8D46-5049A2E96F74@gmx.de>

Hi Sebastian,

see inline.

On 06.04.21 at 23:30 Sebastian Moeller wrote:
>> On Apr 6, 2021, at 22:01, Bless, Roland (TM) <roland.bless@kit.edu> wrote:
>>
>> Hi Sebastian,
>>
>> see comments at the end.
>>
>> On 06.04.21 at 08:31 Sebastian Moeller wrote:
>>> Hi Eric,
>>> thanks for your thoughts.
>>>> On Apr 6, 2021, at 02:47, Erik Auerswald <auerswal@unix-ag.uni-kl.de> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On Mon, Apr 05, 2021 at 11:49:00PM +0200, Sebastian Moeller wrote:
>>>>> all good questions, and interesting responses so far.
>>>> I'll add some details below, I mostly concur with your responses.
>>>>
>>>>>> On Apr 5, 2021, at 14:46, Rich Brown <richb.hanover@gmail.com> wrote:
>>>>>>
>>>>>> Dave Täht has put me up to revising the current Bufferbloat article
>>>>>> on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
>>>>>> [...]
>>>>> [...] while too large buffers cause undesirable increase in latency
>>>>> under load (but decent throughput), [...]
>>>> With too large buffers, even throughput degrades when TCP considers
>>>> a delayed segment lost (or DNS gives up because the answers arrive
>>>> too late).  I do think there is such a thing as _too_ large for buffers, period.
>>> 	Fair enough, timeouts could be changed though if required ;) but I fully concur that largeish buffers require management to become useful ;)
>>>>> The solution basically is large buffers with adaptive management that
>>>> I would prefer the word "sufficient" instead of "large."
>>> 	If properly managed there is no upper end for the size, it might not be used though, no?
>>>>> works hard to keep latency under load increase and throughput inside
>>>>> an acceptable "corridor".
>>>> I concur that there is quite some usable range of buffer capacity when
>>>> considering the latency/throughput trade-off, and AQM seems like a good
>>>> solution to managing that.
>>> 	I fear it is the only network side mitigation technique?
>>>> My preference is to sacrifice throughput for better latency, but then
>>>> I have been bitten by too much latency quite often, but never by too
>>>> little throughput caused by small buffers.  YMMV.
>>> 	Yepp, with speedtests still being the killer application for fast end-user links (which is sad in itself), manufacturers and ISPs are incentivized to err on the side of overly large buffers, so the default buffering typically will not cause noticeable under-utilisation, as long as nobody wants to run single-flow speedtests over a geostationary satellite link ;). (I note that many/most speedtests silently default to testing with multiple flows nowadays, with single-stream tests being at least optional in some, which will reduce the expected buffering need).
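One common rule of thumb from the buffer-sizing literature (Appenzeller et al., "Sizing Router Buffers"; not stated in this thread, so take it as a hedged illustration) quantifies that effect: with n desynchronized long-lived flows the buffer requirement shrinks to roughly BDP/sqrt(n), e.g.:

    # Buffer needed for n long-lived, desynchronized TCP flows: ~ BDP / sqrt(n)
    # (rule of thumb only; the BDP value is an arbitrary example)
    import math
    bdp_bytes = 625_000                           # e.g. 100 Mbit/s * 50 ms
    for n in (1, 4, 16, 64):
        print(n, int(bdp_bytes / math.sqrt(n)))   # 625000, 312500, 156250, 78125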
>>>>> [...]
>>>>> But e.g. for traditional TCPs the amount of expected buffer needs
>>>>> increases with RTT of a flow
>>>> Does it?  Does the propagation delay provide automatic "buffering" in the
>>>> network?  Does the receiver need to advertise sufficient buffer capacity
>>>> (receive window) to allow the sender to fill the pipe?  Does the sender
>>>> need to provide sufficient buffer capacity to retransmit lost segments?
>>>> Where are buffers actually needed?
>>> 	At all those places ;) in the extreme a single packet buffer should be sufficient, but that places unrealistically high demands on the processing capabilities at all nodes of a network and does not account for anything unexpected (like another flow starting). And in all cases doing things smarter can help: pacing is better at the sender's side (with better meaning easier on the network), competent AQM is better at the bottleneck link, and at the receiver something like TCP SACK (and the required buffers to make that work) can help; all those cases work better with buffers. The catch is that buffers solve important issues while introducing new issues that need fixing. I am sure you know all this, but spelling it out helps me to clarify my thoughts on the matter, so please just ignore if boring/old news.
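As a minimal sketch of the pacing point (example numbers only, not from any particular stack): instead of bursting a full window, a pacing sender spaces packets at its estimate of the bottleneck rate, e.g.:

    # Sender-side pacing (sketch): send one packet every packet_size / pacing_rate
    # seconds instead of a back-to-back burst of a whole cwnd.
    pacing_rate_Bps = 12.5e6        # assume ~100 Mbit/s in bytes/s (example value)
    packet_size_bytes = 1500        # roughly one Ethernet MTU
    gap_s = packet_size_bytes / pacing_rate_Bps
    print(gap_s)                    # 0.00012 s, i.e. one packet every 120 microseconds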
>>>> I am not convinced that large buffers in the network are needed for high
>>>> throughput of high RTT TCP flows.
>>>>
>>>> See, e.g., https://people.ucsc.edu/~warner/Bufs/buffer-requirements for
>>>> some information and links to a few papers.
>> Thanks for the link, Erik, but BBR is not properly described there:
>> "When the RTT creeps upward -- this taken as a signal of buffer occupancy congestion"; and Sebastian also mentioned: "measures the induced latency increase from those, interpreting too much latency as a sign that the capacity was reached/exceeded". BBR does not use
>> delay or its gradient as a congestion signal.
> 	Looking at https://queue.acm.org/detail.cfm?id=3022184, I still think that it is not completely wrong to abstractly say BBR evaluates RTT changes as a function of the current sending rate to probe the bottleneck's capacity (and adjusts its sending rate based on that estimated capacity), but that might either indicate I am looking at the whole thing at too abstract a level, or, as I fear, that I am simply misunderstanding BBR's principle of operation... (or both ;)) (Sidenote I keep making: for a protocol believing it knows better than to interpret all packet losses as signs of congestion, it seems rather an oversight not to have implemented an RFC 3168-style CE response...)
>
I think both, but you are in good company. Several people have 
misinterpreted how BBR actually works.
In BBRv1, the measured RTT is only used for the inflight cap (a cwnd of 2 BDP).
The BBR team considers delay as being a too noisy signal (see slide 10
https://www.ietf.org/proceedings/97/slides/slides-97-maprg-traffic-policing-in-the-internet-yuchung-cheng-and-neal-cardwell-00.pdf)
and therefore doesn't use it as a congestion signal. Actually, BBRv1 does not
react to any congestion signal; there isn't even any backoff reaction.
BBRv2, however, reacts to packet loss (>=2%) or ECN signals.
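Roughly, in sketch form (function and variable names are made up; the thread only states the 2% loss threshold and ECN as BBRv2's signals, so the ECN threshold below is a placeholder):

    # BBRv1 (sketch): the min-RTT sample only bounds the data in flight;
    # it is not interpreted as a congestion signal.
    def bbr1_inflight_cap(bottleneck_bw_Bps, min_rtt_s):
        bdp_bytes = bottleneck_bw_Bps * min_rtt_s    # bandwidth-delay product
        return 2 * bdp_bytes                         # cwnd capped at 2 * BDP

    # BBRv2 (sketch): additionally backs off on sustained loss or ECN marking.
    def bbr2_sees_congestion(lost_pkts, delivered_pkts, ce_marked_pkts):
        loss_rate = lost_pkts / max(delivered_pkts, 1)
        ce_rate = ce_marked_pkts / max(delivered_pkts, 1)
        return loss_rate >= 0.02 or ce_rate >= 0.5   # 2% loss; 50% CE is a placeholder threshold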

>>> 	Thanks, I think the bandwidth-delay product is still the worst-case buffering required to allow 100% utilization with a single flow (a use case that seems legit at least for home links, but probably not for a backbone link). But in any case, if the buffers are properly managed, their maximum size will not really matter, as long as it is larger than the required minimum ;)
>> Nope, a BDP-sized buffer is not required to allow 100% utilization with
>> a single flow, because it depends on the used congestion control. For
>> loss-based congestion control like Reno or Cubic, this may be true,
>> but not necessarily for other congestion controls.
> 	Yes, I should have hedged that better. For protocols like the ubiquitous TCP CUBIC (which seems to be used by most major operating systems nowadays) a single flow might need BDP buffering to get close to 100% utilization. I am not saying CUBIC is more important than other protocols, but it still represents a significant share of internet traffic. And any scheme to counter bufferbloat could do worse than to accept that reality and allow for sufficient buffering to give such protocols acceptable levels of utilization (all the while keeping the latency increase under load under control).
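To put an illustrative number on that (example figures, not from the thread): a single CUBIC/Reno-style flow on a 100 Mbit/s path with 50 ms RTT would want on the order of one BDP of bottleneck buffer:

    # BDP rule of thumb for a single loss-based flow (illustrative numbers)
    bandwidth_bit_per_s = 100e6     # 100 Mbit/s link
    rtt_s = 0.05                    # 50 ms round-trip time
    bdp_bytes = bandwidth_bit_per_s * rtt_s / 8
    print(bdp_bytes)                # 625000.0 bytes, i.e. about 625 kB of buffering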
I didn't get what you were trying to say with your last sentence.
My point was that the BDP rule of thumb is tied to a specific type of
congestion control, and that the buffer sizing rule should probably rather
reflect the burst absorption requirements (see "good queue" in the CoDel
paper) than the specifics of congestion controls. CC schemes that try to
counter bufferbloat typically suffer in the presence of loss-based
congestion controls, because the latter only react to loss, and that
requires a full buffer (unless an AQM is in place), which causes
queuing delay.
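The "good queue" idea from the CoDel paper boils down to something like the following check (a heavily simplified sketch of the drop decision, not the actual control law; the constants are CoDel's published defaults):

    # Simplified CoDel-flavored test: tolerate short bursts ("good queue"),
    # react only to a standing queue ("bad queue").
    TARGET_S = 0.005      # 5 ms acceptable standing queuing delay
    INTERVAL_S = 0.100    # 100 ms grace period for burst absorption

    def is_standing_queue(sojourn_s, time_above_target_s):
        # sojourn_s: how long the just-dequeued packet waited in the buffer
        # time_above_target_s: how long sojourn times have continuously exceeded TARGET_S
        return sojourn_s > TARGET_S and time_above_target_s >= INTERVAL_S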

Regards,
  Roland




Thread overview: 26+ messages
2021-04-05 12:46 Rich Brown
2021-04-05 15:13 ` Stephen Hemminger
2021-04-05 15:24   ` David Lang
2021-04-05 15:57     ` Dave Collier-Brown
2021-04-05 16:25     ` Kelvin Edmison
2021-04-05 18:00 ` [Bloat] Questions for Bufferbloat Wikipedia article - question #2 Rich Brown
2021-04-05 18:08   ` David Lang
2021-04-05 20:30     ` Erik Auerswald
2021-04-05 20:36       ` Dave Taht
2021-04-05 21:49 ` [Bloat] Questions for Bufferbloat Wikipedia article Sebastian Moeller
2021-04-05 21:55   ` Dave Taht
2021-04-06  0:47   ` Erik Auerswald
2021-04-06  6:31     ` Sebastian Moeller
2021-04-06 18:50       ` Erik Auerswald
2021-04-06 20:02         ` Bless, Roland (TM)
2021-04-06 21:59           ` Erik Auerswald
2021-04-06 23:32             ` Stephen Hemminger
2021-04-06 23:54               ` David Lang
2021-04-07 11:06             ` Bless, Roland (TM)
2021-04-27  1:41               ` Dave Taht
2021-04-27  7:25                 ` Bless, Roland (TM)
2021-04-06 20:01       ` Bless, Roland (TM)
2021-04-06 21:30         ` Sebastian Moeller
2021-04-06 21:36           ` Jonathan Morton
2021-04-07 10:39           ` Bless, Roland (TM) [this message]
2021-04-06 18:54 ` Neil Davies
