From: Sebastian Moeller
Date: Tue, 6 Apr 2021 23:30:44 +0200
To: "Bless, Roland (TM)"
Cc: Erik Auerswald, bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Questions for Bufferbloat Wikipedia article
Message-Id: <2C969F6A-660D-4806-8D46-5049A2E96F74@gmx.de>
In-Reply-To: <033d63b1-8c7e-751a-5768-96ffcd825fce@kit.edu>

Hi Roland,

thanks, much appreciated.

> On Apr 6, 2021, at 22:01, Bless, Roland (TM) wrote:
>
> Hi Sebastian,
>
> see comments at the end.
>
> On 06.04.21 at 08:31 Sebastian Moeller wrote:
>> Hi Erik,
>> thanks for your thoughts.
>>> On Apr 6, 2021, at 02:47, Erik Auerswald wrote:
>>>
>>> Hi,
>>>
>>> On Mon, Apr 05, 2021 at 11:49:00PM +0200, Sebastian Moeller wrote:
>>>>
>>>> all good questions, and interesting responses so far.
>>>
>>> I'll add some details below, I mostly concur with your responses.
>>>
>>>>> On Apr 5, 2021, at 14:46, Rich Brown wrote:
>>>>>
>>>>> Dave Täht has put me up to revising the current Bufferbloat article
>>>>> on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
>>>>> [...]
>>>> [...] while too large buffers cause an undesirable increase in latency
>>>> under load (but decent throughput), [...]
>>>
>>> With too large buffers, even throughput degrades when TCP considers
>>> a delayed segment lost (or DNS gives up because the answers arrive
>>> too late). I do think there is such a thing as _too_ large for buffers, period.
>>
>> Fair enough, timeouts could be changed though if required ;) but I fully
>> concur that largish buffers require management to become useful ;)
>>
>>>> The solution basically is large buffers with adaptive management that
>>>
>>> I would prefer the word "sufficient" instead of "large."
>>
>> If properly managed there is no upper end for the size, it might not be
>> used though, no?
>>
>>>> works hard to keep latency under load increase and throughput inside
>>>> an acceptable "corridor".
>>>
>>> I concur that there is quite some usable range of buffer capacity when
>>> considering the latency/throughput trade-off, and AQM seems like a good
>>> solution to managing that.
>>
>> I fear it is the only network-side mitigation technique?
>>
>>> My preference is to sacrifice throughput for better latency, but then
>>> I have been bitten by too much latency quite often, but never by too
>>> little throughput caused by small buffers. YMMV.
>>
>> Yep, with speedtests being the killer application for fast end-user links
>> (still, which is sad in itself), manufacturers and ISPs are incentivized
>> to err on the side of too-large buffers, so the default buffering
>> typically will not cause noticeable under-utilisation, as long as nobody
>> wants to run single-flow speedtests over a geostationary satellite link ;).
>> (I note that many/most speedtests silently default to testing with
>> multiple flows nowadays, with single-stream tests being at least optional
>> in some, which will reduce the expected buffering need.)
>>
>>>> [...]
>>>> But e.g. for traditional TCPs the amount of expected buffer need
>>>> increases with the RTT of a flow
>>>
>>> Does it? Does the propagation delay provide automatic "buffering" in the
>>> network? Does the receiver need to advertise sufficient buffer capacity
>>> (receive window) to allow the sender to fill the pipe? Does the sender
>>> need to provide sufficient buffer capacity to retransmit lost segments?
>>> Where are buffers actually needed?
>>
>> At all those places ;) In the extreme a single packet buffer should be
>> sufficient, but that places unrealistically high demands on the processing
>> capabilities at all nodes of a network and does not account for anything
>> unexpected (like another flow starting). And in all cases doing things
>> smarter can help: pacing is better at the sender's side (with "better"
>> meaning easier on the network), competent AQM is better at the bottleneck
>> link, and at the receiver something like TCP SACK (and the required
>> buffers to make that work) can help; all those cases work better with
>> buffers. The catch is that buffers solve important issues while
>> introducing new issues that need fixing. I am sure you know all this, but
>> spelling it out helps me to clarify my thoughts on the matter, so please
>> just ignore if boring/old news.
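[Aside, mostly with the Wikipedia audience in mind: the "adaptive management"
idea is perhaps easiest to see in a toy sketch of the CoDel approach. This is
heavily simplified and not the reference algorithm; the constants are just the
commonly cited defaults and the function/variable names are mine. The point is
that the buffer itself can stay large; congestion is only signalled once the
*standing* queueing delay exceeds a small target for a whole interval, so
short bursts still get absorbed:

  # Toy sketch of the CoDel entry condition (simplified, illustrative only)
  TARGET = 0.005     # 5 ms: acceptable standing queueing delay
  INTERVAL = 0.100   # 100 ms: roughly a worst-case RTT to judge over

  def should_signal_congestion(sojourn_time, now, state):
      """Drop or ECN-mark the head-of-queue packet?

      sojourn_time: how long that packet sat in the buffer (seconds)
      state: dict carrying 'first_above_time' between calls
      """
      if sojourn_time < TARGET:
          state['first_above_time'] = None   # queue drained below target
          return False
      if state.get('first_above_time') is None:
          # delay just crossed the target; give it one interval to drain
          state['first_above_time'] = now + INTERVAL
          return False
      # delay stayed above TARGET for a full INTERVAL -> signal congestion
      return now >= state['first_above_time']

The real algorithm then spaces further drops on an increasing schedule, but
the gist stands: big buffer, small standing queue.]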
>>> I am not convinced that large buffers in the network are needed for high
>>> throughput of high-RTT TCP flows.
>>>
>>> See, e.g., https://people.ucsc.edu/~warner/Bufs/buffer-requirements for
>>> some information and links to a few papers.
>
> Thanks for the link Erik, but BBR is not properly described there:
> "When the RTT creeps upward -- this taken as a signal of buffer occupancy
> congestion", and Sebastian also mentioned: "measures the induced latency
> increase from those, interpreting too much latency as a sign that the
> capacity was reached/exceeded". BBR does not use delay or its gradient as
> congestion signal.

Looking at https://queue.acm.org/detail.cfm?id=3022184, I still think that it
is not completely wrong to abstractly say that BBR evaluates RTT changes as a
function of the current sending rate to probe the bottleneck's capacity (and
adjusts its sending rate based on that estimated capacity), but that might
either indicate that I am looking at the whole thing at too abstract a level,
or, as I fear, that I am simply misunderstanding BBR's principle of
operation... (or both ;)) (Sidenote I keep making: for a protocol believing it
knows better than to interpret all packet losses as signs of congestion, it
seems rather an oversight not to have implemented an RFC 3168-style CE
response...)
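To spell out the abstract model I have in mind (grossly simplified, certainly
not how the actual implementation is structured, and the names and numbers
below are mine, not BBR's): the sender keeps an estimate of the bottleneck
bandwidth (max of recent delivery-rate samples) and of the base RTT (min of
recent RTT samples), paces at roughly the former, caps inflight near their
product, and periodically probes above the estimate to see what that does to
the samples:

  # Grossly simplified sketch of the BBR model (names and structure are mine)
  class BbrModelSketch:
      PACING_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]  # probe up, then drain

      def __init__(self):
          self.btl_bw = 0.0            # running max of delivery-rate samples
          self.rt_prop = float('inf')  # running min of RTT samples
          self.phase = 0               # real BBR advances this once per RTT

      def on_ack(self, delivery_rate, rtt_sample):
          # real BBR uses windowed max/min filters so stale samples expire
          self.btl_bw = max(self.btl_bw, delivery_rate)
          self.rt_prop = min(self.rt_prop, rtt_sample)

      def pacing_rate(self):
          gain = self.PACING_GAINS[self.phase]
          self.phase = (self.phase + 1) % len(self.PACING_GAINS)
          return gain * self.btl_bw

      def inflight_cap(self):
          # keep roughly a BDP (plus headroom) in flight
          return 2 * self.btl_bw * self.rt_prop

That probing step (send a bit above the estimate, watch the delivery-rate and
RTT samples) is where my "RTT changes as a function of the sending rate"
framing comes from; Roland is of course right that neither delay nor its
gradient is used as a direct congestion signal.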
>
>> Thanks, I think the bandwidth-delay product is still the worst-case
>> buffering required to allow 100% utilization with a single flow (a use
>> case that at least for home links seems legit, for a backbone link
>> probably not). But in any case, if the buffers are properly managed their
>> maximum size will not really matter, as long as it is larger than the
>> required minimum ;)
>
> Nope, a BDP-sized buffer is not required to allow 100% utilization with
> a single flow, because it depends on the used congestion control. For
> loss-based congestion control like Reno or Cubic, this may be true,
> but not necessarily for other congestion controls.

Yes, I should have hedged that better. For protocols like the ubiquitous TCP
CUBIC (which seems to be used by most major operating systems nowadays) a
single flow might need BDP buffering to get close to 100% utilization. I do
not want to say CUBIC is more important than other protocols, but it still
represents a significant share of internet traffic. And any scheme to counter
bufferbloat could do worse than accept that reality and allow for sufficient
buffering to give such protocols acceptable levels of utilization (all the
while keeping the latency-under-load increase under control).

Best Regards
	Sebastian

>
> Regards,
> Roland
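P.S.: To put a rough number on the BDP point above (back-of-the-envelope only,
assuming a single long-running CUBIC-like flow and the textbook rule of thumb
of roughly one BDP of bottleneck buffering; the figures are illustrative, not
measurements):

  # Back-of-the-envelope bandwidth-delay product, in bytes
  def bdp_bytes(bottleneck_bits_per_s, rtt_seconds):
      return bottleneck_bits_per_s * rtt_seconds / 8

  print(bdp_bytes(100e6, 0.020))  # 100 Mbit/s at  20 ms RTT -> 250000 bytes (~0.25 MB)
  print(bdp_bytes(100e6, 0.300))  # 100 Mbit/s at 300 ms RTT -> 3750000 bytes (~3.75 MB)

which is just the "buffer need increases with the RTT of a flow" point from
above in numbers; with competent AQM on top, provisioning for the larger
figure does not have to cost latency in the common low-RTT case.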