From: "Bless, Roland (TM)"
Organization: Institute of Telematics, Karlsruhe Institute of Technology (KIT)
To: Sebastian Moeller
Cc: Erik Auerswald, bloat@lists.bufferbloat.net
Date: Wed, 7 Apr 2021 12:39:16 +0200
Subject: Re: [Bloat] Questions for Bufferbloat Wikipedia article

Hi Sebastian,

see inline.

On 06.04.21 at 23:30 Sebastian Moeller wrote:
>> On Apr 6, 2021, at 22:01, Bless, Roland (TM) wrote:
>>
>> Hi Sebastian,
>>
>> see comments at the end.
>>
>> On 06.04.21 at 08:31 Sebastian Moeller wrote:
>>> Hi Erik,
>>> thanks for your thoughts.
>>>> On Apr 6, 2021, at 02:47, Erik Auerswald wrote:
>>>>
>>>> Hi,
>>>>
>>>> On Mon, Apr 05, 2021 at 11:49:00PM +0200, Sebastian Moeller wrote:
>>>>> all good questions, and interesting responses so far.
>>>> I'll add some details below, I mostly concur with your responses.
>>>>
>>>>>> On Apr 5, 2021, at 14:46, Rich Brown wrote:
>>>>>>
>>>>>> Dave Täht has put me up to revising the current Bufferbloat article
>>>>>> on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
>>>>>> [...]
>>>>> [...] while too large buffers cause an undesirable increase in latency
>>>>> under load (but decent throughput), [...]
>>>> With too large buffers, even throughput degrades when TCP considers
>>>> a delayed segment lost (or DNS gives up because the answers arrive
>>>> too late). I do think there is _too_ large for buffers, period.
>>> Fair enough, timeouts could be changed though if required ;) but I fully concur that largish buffers require management to become useful ;)
>>>>> The solution basically is large buffers with adaptive management that
>>>> I would prefer the word "sufficient" instead of "large."
>>> If properly managed there is no upper end for the size; it might not be used though, no?
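To make the latency cost of an unmanaged largish buffer concrete, here is a back-of-the-envelope sketch; the buffer size and link rate are made-up example numbers, not measurements:

    # Queuing delay contributed by a full FIFO buffer:
    # delay = buffered bits / link rate.
    buffer_bytes = 1_000_000     # 1 MB of buffering (illustrative)
    link_rate_bps = 16_000_000   # 16 Mbit/s bottleneck (illustrative)
    delay_ms = buffer_bytes * 8 / link_rate_bps * 1000
    print(f"worst-case queuing delay: {delay_ms:.0f} ms")  # -> 500 ms

Without management, a loss-based TCP flow will keep such a buffer full, so this delay becomes the steady-state latency under load.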
>>>>> works hard to keep latency under load increase and throughput inside
>>>>> an acceptable "corridor".
>>>> I concur that there is quite some usable range of buffer capacity when
>>>> considering the latency/throughput trade-off, and AQM seems like a good
>>>> solution to managing that.
>>> I fear it is the only network-side mitigation technique?
>>>> My preference is to sacrifice throughput for better latency, but then
>>>> I have been bitten by too much latency quite often, but never by too
>>>> little throughput caused by small buffers. YMMV.
>>> Yepp, with speedtests being the killer application for fast end-user links (still, which is sad in itself), manufacturers and ISPs are incentivized to err on the side of too-large buffers, so the default buffering typically will not cause noticeable under-utilisation, as long as nobody wants to run single-flow speedtests over a geostationary satellite link ;). (I note that many/most speedtests silently default to testing with multiple flows nowadays, with single-stream tests being at least optional in some, which will reduce the expected buffering need.)
>>>>> [...]
>>>>> But e.g. for traditional TCPs the amount of buffering needed
>>>>> increases with the RTT of a flow
>>>> Does it? Does the propagation delay provide automatic "buffering" in the
>>>> network? Does the receiver need to advertise sufficient buffer capacity
>>>> (receive window) to allow the sender to fill the pipe? Does the sender
>>>> need to provide sufficient buffer capacity to retransmit lost segments?
>>>> Where are buffers actually needed?
>>> At all those places ;) In the extreme, a single packet buffer should be sufficient, but that places unrealistically high demands on the processing capabilities at all nodes of a network and does not account for anything unexpected (like another flow starting). And in all cases doing things smarter can help: pacing is better at the sender's side (with "better" meaning easier on the network), competent AQM is better at the bottleneck link, and at the receiver something like TCP SACK (and the buffers required to make it work) can help; all those cases work better with buffers. The catch is that buffers solve important issues while introducing new issues that need fixing. I am sure you know all this, but spelling it out helps me clarify my thoughts on the matter, so please just ignore if boring/old news.
>>>> I am not convinced that large buffers in the network are needed for high
>>>> throughput of high-RTT TCP flows.
>>>>
>>>> See, e.g., https://people.ucsc.edu/~warner/Bufs/buffer-requirements for
>>>> some information and links to a few papers.
>> Thanks for the link, Erik, but BBR is not properly described there
>> ("When the RTT creeps upward -- this taken as a signal of buffer occupancy congestion"), and Sebastian also mentioned: "measures the induced latency increase from those, interpreting too much latency as a sign that the capacity was reached/exceeded". BBR does not use
>> delay or its gradient as a congestion signal.
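To illustrate the distinction, here is a minimal sketch of BBRv1's control variables, simplified from the BBR description in https://queue.acm.org/detail.cfm?id=3022184; the function name and fixed gains are my own simplification, not the actual implementation:

    # BBRv1 paces at (gain x estimated bottleneck bandwidth) and caps the
    # data in flight at 2 BDP; the measured min RTT feeds only that cap.
    def bbr_v1_controls(max_bw_bps, min_rtt_s, pacing_gain=1.0):
        pacing_rate_bps = pacing_gain * max_bw_bps  # gain cycling probes for bandwidth
        bdp_bytes = max_bw_bps / 8 * min_rtt_s      # bandwidth-delay product estimate
        cwnd_cap_bytes = 2 * bdp_bytes              # inflight cap: the RTT's only role
        # Note: neither loss nor rising delay enters the computation; BBRv1
        # has no backoff on a congestion signal (BBRv2 adds loss/ECN reactions).
        return pacing_rate_bps, cwnd_cap_bytes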
> Looking at https://queue.acm.org/detail.cfm?id=3022184, I still think that it is not completely wrong to abstractly say that BBR evaluates RTT changes as a function of the current sending rate to probe the bottleneck's capacity (and adjusts its sending rate based on that estimated capacity), but that might either indicate that I am looking at the whole thing at too abstract a level, or, as I fear, that I am simply misunderstanding BBR's principle of operation... (or both ;)) (Sidenote I keep making: for a protocol believing it knows better than to interpret all packet losses as signs of congestion, it seems rather an oversight not to have implemented an RFC 3168-style CE response...)
>
I think both, but you are in good company. Several people have
misinterpreted how BBR actually works.
In BBRv1, the measured RTT is only used for the inflight cap (a CWnd of 2 BDP).
The BBR team considers delay too noisy a signal (see slide 10 of
https://www.ietf.org/proceedings/97/slides/slides-97-maprg-traffic-policing-in-the-internet-yuchung-cheng-and-neal-cardwell-00.pdf)
and therefore doesn't use it as a congestion signal. Actually, BBRv1 does
not react to any congestion signal; there isn't even any backoff reaction.
BBRv2, however, reacts to packet loss (>= 2%) or ECN signals.
>>> Thanks, I think the bandwidth-delay product is still the worst-case buffering required to allow 100% utilization with a single flow (a use case that at least for home links seems legit, for a backbone link probably not). But in any case, if the buffers are properly managed, their maximum size will not really matter, as long as it is larger than the required minimum ;)
>> Nope, a BDP-sized buffer is not required to allow 100% utilization with
>> a single flow, because it depends on the congestion control used. For
>> loss-based congestion controls like Reno or Cubic this may be true,
>> but not necessarily for other congestion controls.
> Yes, I should have hedged that better. For protocols like the ubiquitous TCP CUBIC (which seems to be used by most major operating systems nowadays), a single flow might need BDP buffering to get close to 100% utilization. I am not wanting to say CUBIC is more important than other protocols, but it still represents a significant share of Internet traffic. And any scheme to counter bufferbloat could do worse than to accept that reality and allow for sufficient buffering to give such protocols acceptable levels of utilization (all the while keeping the latency-under-load increase under control).

I didn't get what you were trying to say with your last sentence.
My point was that the BDP rule of thumb is tied to a specific type of
congestion control, and that the buffer sizing rule should probably rather
reflect burst absorption requirements (see "good queue" in the CoDel paper)
than the specifics of congestion controls. CC schemes that try to counter
bufferbloat typically suffer in the presence of loss-based congestion
controls, because the latter only react to loss, and loss requires a full
buffer (unless an AQM is in place), which causes queuing delay.

Regards,
 Roland
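P.S.: To pin down the BDP arithmetic this thread keeps returning to, a rough sketch; the link parameters are made-up examples:

    # Rule of thumb: a single loss-based flow (Reno/CUBIC) needs up to one
    # bandwidth-delay product of FIFO buffering to keep the bottleneck at
    # 100% utilization across its multiplicative-decrease cycle.
    bw_bps = 100_000_000   # 100 Mbit/s bottleneck (illustrative)
    rtt_s = 0.040          # 40 ms round-trip time (illustrative)
    bdp_bytes = bw_bps / 8 * rtt_s
    print(f"BDP / rule-of-thumb buffer: {bdp_bytes / 1e3:.0f} kB")  # -> 500 kB

As noted above, this rule is specific to loss-based congestion controls; with an AQM in place, the physical buffer can be larger without the latency penalty.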