From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 5 Apr 2021 22:30:41 +0200
From: Erik Auerswald
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Questions for Bufferbloat Wikipedia article - question #2
Message-ID: <20210405203041.GA24412@unix-ag.uni-kl.de>
References: <9A233C8C-5C48-4483-A087-AA5FE1011388@gmail.com>

Hi,

On Mon, Apr 05, 2021 at 11:08:07AM -0700, David Lang wrote:
> On Mon, 5 Apr 2021, Rich Brown wrote:
>
> > Next question...
> >> 2) All network equipment can be bloated. I have seen (but not
> >> really followed) controversy regarding the amount of buffering
> >> needed in the Data Center. Is it worth having the Wikipedia article
> >> distinguish between Data Center equipment and CPE/home/last mile
> >> equipment? Similarly, is the "bloat condition" and its mitigation
> >> qualitatively different between those applications? Finally, do
> >> any of us know how frequently data centers/backbone ISPs experience
> >> buffer-induced latencies? What's the magnitude of the impact?

I do not have experience with "web scale" data centers or "backbone"
ISPs, but I think I can add some related information.

From my work experience with (mostly) enterprise and service provider
networks, I would say that bufferbloat effects are observed relatively
rarely there. Many network engineers do not know about bufferbloat, and
many do not believe in its existence even after being told about it.
I have seen a latency analysis for a country-wide network that
explicitly dismissed queuing delay as irrelevant and cited only
propagation and serialization delay as relevant to end-to-end latency.
Demonstrating bufferbloat effects in a test setup with prolonged
congestion is usually labeled unrealistic and ignored.

Campus networks and ("small") data centers are usually overprovisioned
with bandwidth and thus do not exhibit prolonged congestion.
Additionally, a lot of enterprise networking gear, specifically
"switches," does not have oversized buffers. Campus networks more often
show problems with buffers that are too small for a given application
(e.g., cameras streaming data via RTP with large "key frames" sent at
line rate), such that "microbursts" cause packet drops, and thus
observable problems, even when bandwidth utilization measured over
longer time frames (minutes) is low. The idea that buffers could be
too large does not seem realistic there.
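To make the microburst arithmetic concrete, here is a rough sketch; the
numbers (a 2 MB key frame sent at 10 Gb/s line rate into a port that
drains at 1 Gb/s and buffers 1 MB) are illustrative figures I chose,
not measurements from any particular deployment:

```python
def microburst_drops(burst_bytes, in_rate_bps, out_rate_bps, buffer_bytes):
    """Return bytes dropped when one back-to-back burst hits the buffer.

    While the burst arrives, the queue grows at (in_rate - out_rate);
    whatever exceeds the buffer is dropped, even if average utilization
    over minutes is low.
    """
    burst_time = burst_bytes * 8 / in_rate_bps      # seconds the burst lasts
    drained = out_rate_bps * burst_time / 8         # bytes forwarded meanwhile
    peak_queue = burst_bytes - drained              # queue depth needed
    return max(0.0, peak_queue - buffer_bytes)      # bytes that do not fit

# 2 MB burst at 10 Gb/s into a 1 Gb/s egress with a 1 MB buffer:
# 800 kB are lost, although the link is idle most of the time.
dropped = microburst_drops(2_000_000, 10e9, 1e9, 1_000_000)
```

The same burst into a larger buffer (or a slower camera) loses nothing,
which is why "more buffer" looks like the obvious fix in these networks.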
"Routers" for the ISP market (not "home routers", but network devices used inside the ISP's core and aggregation networks and similar) often do have unreasonably ("bloated") buffer capacity, but they are usually operated without persistent congestion. When persistent congestion does happen on a customer connection, and bufferbloat does result in unusably high latency, the customer is often told to send at a lower rate, but "bufferbloat" is usually not recognized as the root cause, and thus not addressed. It seems to me as if "bufferbloat" is most noticable on the consumer end of mass market network connections. I.e., low margin markets with non-technical customers. If CAKE behind the access circuit of an end customer can mitigate bufferbloat, then bufferbloat effects are only visible there and do not show up in other parts of the network. > the bandwidth available in datacenters is high enough that it's much > harder to run into grief there (recognizing that not every piece of > datacenter equipment is hooked to 100G circuits) That is my impression as well. > I think it's best to talk about excessive buffers in terms of time > rather than bytes, and you can then show the difference between two > buffers of the same size, one connected to a 10Mb (or 1Mb) DSL upload > vs 100G datacenter circuit. After that one example, the rest of the > article can talk about time and it will be globally applicable. I too think that _time_ is the important unit regarding buffers, even though they are mostly described in units of data (bytes or packets). Thanks, Erik -- To have our best advice ignored is the common fate of all who take on the role of consultant, ever since Cassandra pointed out the dangers of bringing a wooden horse within the walls of Troy. -- C.A.R. Hoare