From: Sebastian Moeller
Date: Tue, 6 Apr 2021 08:31:01 +0200
To: Erik Auerswald
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Questions for Bufferbloat Wikipedia article

Hi Erik,

thanks for your thoughts.
> On Apr 6, 2021, at 02:47, Erik Auerswald wrote:
> 
> Hi,
> 
> On Mon, Apr 05, 2021 at 11:49:00PM +0200, Sebastian Moeller wrote:
>> 
>> all good questions, and interesting responses so far.
> 
> I'll add some details below, I mostly concur with your responses.
> 
>>> On Apr 5, 2021, at 14:46, Rich Brown wrote:
>>> 
>>> Dave Täht has put me up to revising the current Bufferbloat article
>>> on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
>>> [...]
>> 
>> [...] while too large buffers cause undesirable increase in latency
>> under load (but decent throughput), [...]
> 
> With too large buffers, even throughput degrades when TCP considers
> a delayed segment lost (or DNS gives up because the answers arrive
> too late). I do think there is such a thing as _too_ large for
> buffers, period.

Fair enough, timeouts could be changed though if required ;) But I fully
concur that largish buffers require management to become useful ;)

> 
>> The solution basically is large buffers with adaptive management that
> 
> I would prefer the word "sufficient" instead of "large."

If properly managed there is no upper end for the size; it might simply
not be used, though, no?

> 
>> works hard to keep latency under load increase and throughput inside
>> an acceptable "corridor".
> 
> I concur that there is quite some usable range of buffer capacity when
> considering the latency/throughput trade-off, and AQM seems like a good
> solution to managing that.

I fear it is the only network-side mitigation technique?

> 
> My preference is to sacrifice throughput for better latency, but then
> I have been bitten by too much latency quite often, but never by too
> little throughput caused by small buffers. YMMV.

Yepp, with speedtests being the killer application for fast end-user
links (still, which is sad in itself), manufacturers and ISPs are
incentivized to err on the side of too-large buffers, so the default
buffering typically will not cause noticeable under-utilisation, as long
as nobody wants to run single-flow speedtests over a geostationary
satellite link ;). (I note that many/most speedtests silently default to
testing with multiple flows nowadays, with single-stream tests being at
least optional in some, which will reduce the expected buffering need.)

> 
>> [...]
>> But e.g. for traditional TCPs the amount of expected buffer needs
>> increases with RTT of a flow
> 
> Does it? Does the propagation delay provide automatic "buffering" in the
> network? Does the receiver need to advertise sufficient buffer capacity
> (receive window) to allow the sender to fill the pipe? Does the sender
> need to provide sufficient buffer capacity to retransmit lost segments?
> Where are buffers actually needed?

At all those places ;) In the extreme, a single packet buffer should be
sufficient, but that places unrealistically high demands on the
processing capabilities at all nodes of a network and does not account
for anything unexpected (like another flow starting). And in all cases
doing things smarter can help: pacing is better at the sender's side
(with "better" meaning easier on the network), competent AQM is better
at the bottleneck link, and at the receiver something like TCP SACK (and
the required buffers to make that work) can help; all those cases work
better with buffers. The catch is that buffers solve important issues
while introducing new issues that need fixing.
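To put rough numbers on the "buffer needs scale with RTT" point, here is
a back-of-the-envelope sketch in Python; the 50 Mbit/s rate, the RTT
values, and the 8 MB buffer below are made-up illustration values, not
measurements from any real link:

# Back-of-the-envelope sketch: classic worst-case buffer for a single TCP
# flow (the bandwidth-delay product) versus the latency cost of an
# oversized, unmanaged buffer that sits full. All numbers are
# hypothetical examples.

def bdp_bytes(rate_bit_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes."""
    return rate_bit_s * rtt_s / 8

def standing_queue_delay_s(buffer_bytes: float, rate_bit_s: float) -> float:
    """Added delay if the whole buffer stays occupied (the bufferbloat case)."""
    return buffer_bytes * 8 / rate_bit_s

rate = 50e6  # assume a 50 Mbit/s bottleneck link
for rtt in (0.02, 0.1, 0.6):  # ~metro, intercontinental, geostationary paths
    print(f"RTT {rtt * 1e3:5.0f} ms -> BDP {bdp_bytes(rate, rtt) / 1e6:.2f} MB")

# A hypothetical unmanaged 8 MB device buffer on the same link:
delay_ms = standing_queue_delay_s(8e6, rate) * 1e3
print(f"8 MB standing queue adds about {delay_ms:.0f} ms of queuing delay")

So the worst-case single-flow buffer grows linearly with RTT (from a
fraction of a megabyte on a terrestrial path to a few megabytes for a
geostationary one), while a multi-megabyte buffer that actually sits full
adds latency on the order of a second, which is exactly where timeouts
and interactive traffic start to hurt.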
I am sure you know all this, but spelling it out helps me to clarify my
thoughts on the matter, so please just ignore this if it is boring/old
news.

> 
> I am not convinced that large buffers in the network are needed for high
> throughput of high-RTT TCP flows.
> 
> See, e.g., https://people.ucsc.edu/~warner/Bufs/buffer-requirements for
> some information and links to a few papers.

Thanks. I think the bandwidth-delay product is still the worst-case
buffering required to allow 100% utilization with a single flow (a use
case that at least for home links seems legit, for a backbone link
probably not). But in any case, if the buffers are properly managed,
their maximum size will not really matter, as long as it is larger than
the required minimum ;)

Best Regards
        Sebastian

> 
>> [...]
> 
> Thanks,
> Erik
> -- 
> The computing scientist's main challenge is not to get confused by
> the complexities of his own making.
>         -- Edsger W. Dijkstra