Subject: Re: [Bloat] Questions for Bufferbloat Wikipedia article
From: Sebastian Moeller
To: Rich Brown
Cc: bloat
Date: Mon, 5 Apr 2021 23:49:00 +0200
Message-Id: <320ECDF6-8B03-4995-944B-4726B866B2D3@gmx.de>
In-Reply-To: <9A233C8C-5C48-4483-A087-AA5FE1011388@gmail.com>

Hi Rich,

all good questions, and interesting responses so far.
> On Apr 5, 2021, at 14:46, Rich Brown wrote:
>
> Dave Täht has put me up to revising the current Bufferbloat article on Wikipedia (https://en.wikipedia.org/wiki/Bufferbloat)
>
> Before I get into it, I want to ask real experts for some guidance... Here goes:
>
> 1) What is *our* definition of Bufferbloat? (We invented the term, so I think we get to define it.)
>
> a) Are we content with the definition from the bufferbloat.net site, "Bufferbloat is the undesirable latency that comes from a router or other network equipment buffering too much data." (This suggests bufferbloat is latency, and could be measured in seconds/msec.)
>
> b) Or should we use something like Jim Gettys' definition from the Dark Buffers article (https://ieeexplore.ieee.org/document/5755608), "Bufferbloat is the existence of excessively large (bloated) buffers in systems, particularly network communication systems." (This suggests bufferbloat is an unfortunate state of nature, measured in units of "unhappiness" :-)

I do not even think these are mutually exclusive; "over-sized but under-managed buffers" cause avoidable variable latency, aka jitter, which is the bane of all interactive use-cases. The lower the jitter the better, and jitter can be measured in units of time, but it also acts as "currency" in the unhappiness domain ;). The challenge is that we know that no/too-small buffers cause an undesirable loss of throughput (but small latency under load), while too-large buffers cause an undesirable increase in latency under load (but decent throughput). So the challenge is to get buffering right: keep throughput acceptably high while at the same time keeping latency under load acceptably low...

The solution basically is large buffers with adaptive management that works hard to keep the latency increase under load and the throughput inside an acceptable "corridor".

> c) Or some other definition?
>
> 2) All network equipment can be bloated.

+1; depending on conditions. Corollary: static buffer sizing is unlikely to be the right answer unless the load is constant...

> I have seen (but not really followed) controversy regarding the amount of buffering needed in the Data Center.

Conceptually the same as everywhere else: just enough to keep throughput up ;) But e.g. for traditional TCPs the expected buffer need increases with the RTT of a flow, so intra-datacenter flows with low RTTs will only require relatively small buffers to cope.

> Is it worth having the Wikipedia article distinguish between Data Center equipment and CPE/home/last mile equipment?

That depends on our audience, but realistically over-sized but under-managed buffers can and do occur everywhere, so maybe better to include all?

> Similarly, is the "bloat condition" and its mitigation qualitatively different between those applications?

IMHO, not really. We have two places to twiddle: the buffer (and how it is managed) and the two endpoints transferring data. Our go-to solution deals with buffer management, but protocols can also help, e.g. by using pacing (spreading out packets based on the estimated throughput) instead of sending in bursts.
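To make the pacing idea concrete, here is a minimal sketch (illustrative only: the rate estimate, packet size, and the user-space sleep are assumptions for the example; real stacks do the spacing in the TCP implementation or in a pacing-aware qdisc such as Linux's fq, not by sleeping):

#!/usr/bin/env python3
# Pacing sketch: instead of handing a burst of packets to the link at
# once, space the sends so the aggregate rate matches an estimated
# bottleneck rate. All numbers below are made-up example values.

import time

def paced_send(packets, est_rate_bps, send):
    """Send packets spaced so the send rate is roughly est_rate_bps."""
    for pkt in packets:
        send(pkt)
        # time one packet of this size occupies the estimated bottleneck
        time.sleep(len(pkt) * 8 / est_rate_bps)

if __name__ == "__main__":
    burst = [bytes(1500)] * 10            # ten 1500-byte packets (example)
    paced_send(burst,
               est_rate_bps=50e6,         # assumed 50 Mbit/s estimate
               send=lambda p: None)       # stand-in for a real socket send

If the estimate is close to the real bottleneck rate, the queue there sees isolated packets instead of a whole burst, which is what keeps the induced latency small.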
Another protocol-side option is using protocols that are more adaptive to the perceived buffering along a path, like BBR (which, as you surely know, tries to actively measure a path's capacity by regularly sending closely spaced probe packets and measuring the induced latency increase from those, interpreting too much latency as a sign that the capacity was reached/exceeded).

Methods at both places are not guaranteed to work hand in hand though (naive BBR fails to recognize an AQM on the path that keeps latency under load well-bounded, which was noted and fixed in later BBR incarnations), making the whole problem space "a mess".

> Finally, do any of us know how frequently data centers/backbone ISPs experience buffer-induced latencies? What's the magnitude of the impact?

I have to pass, -ENODATA ;)

>
> 3) The Wikipedia article mentions guidance that network gear should accommodate buffering 250 msec of traffic(!) Is this a real "rule of thumb" or just an often-repeated but unscientific suggestion? Can someone give pointers to best practices?

I am sure that any fixed number will be wrong ;) there might be numbers worse than others, though (a quick back-of-the-envelope at the end of this mail shows why a single worst-case number cannot fit all link rates and RTTs).

>
> 4) Meta question: Can anyone offer any advice on making a wholesale change to a Wikipedia article?

Maybe don't? Instead of doing this in one go, evolve the existing article piece-wise, avoiding the wrong impression of a hostile take-over? And allowing for a nicer history of targeted commits?

> Before I offer a fork-lift replacement I would a) solicit advice on the new text from this list, and b) try to make contact with some of the reviewers and editors who've been maintaining the page to establish some bona fides and rapport...

I guess, if you get the buy-in from the current maintainers, a fork-lift upgrade might work...

Best Regards
	Sebastian

>
> Many thanks!
>
> Rich
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
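P.S.: The back-of-the-envelope promised above, putting the 250 ms rule of thumb from question 3 next to one bandwidth-delay product (BDP) at a short and a long RTT. The rates and RTTs are just example values, not sizing recommendations:

#!/usr/bin/env python3
# How many bytes is "250 ms of traffic", and how does that compare to one
# bandwidth-delay product (BDP) at a short and at a long RTT?
# Example rates/RTTs only; nothing here is a sizing recommendation.

def buffer_bytes(rate_bps, delay_s):
    """Bytes needed to hold delay_s worth of traffic at rate_bps."""
    return rate_bps * delay_s / 8

for rate_mbps in (10, 100, 1000):
    rate_bps = rate_mbps * 1e6
    rule_250ms = buffer_bytes(rate_bps, 0.250)   # the fixed 250 ms guidance
    bdp_1ms    = buffer_bytes(rate_bps, 0.001)   # ~1 ms intra-datacenter RTT
    bdp_100ms  = buffer_bytes(rate_bps, 0.100)   # ~100 ms intercontinental RTT
    print(f"{rate_mbps:4d} Mbit/s: 250 ms = {rule_250ms / 1e6:6.2f} MB, "
          f"1 ms BDP = {bdp_1ms / 1e3:6.1f} kB, "
          f"100 ms BDP = {bdp_100ms / 1e6:5.2f} MB")

The same fixed 250 ms corresponds to wildly different byte counts depending on the link rate, and to roughly 250 times the BDP of a 1 ms RTT flow, which is exactly why any fixed number will be wrong for somebody.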