From: Toke Høiland-Jørgensen
To: Bjørn Ivar Teigen, bloat
Date: Tue, 02 Nov 2021 13:14:38 +0100
Message-ID: <875yta7o0h.fsf@toke.dk>
Subject: Re: [Bloat] Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes

Bjørn Ivar Teigen writes:

> Hi everyone,
>
> I've recently published a paper on Arxiv which is relevant to the
> Bufferbloat problem. I hope it will be helpful in convincing AQM doubters.
> Discussions at the recent IAB workshop inspired me to write a detailed
> argument for why end-to-end methods cannot avoid latency spikes. I couldn't
> find this argument in the literature.
>
> Here is the Arxiv link: https://arxiv.org/abs/2111.00488

I found this a very approachable paper expressing a phenomenon that
should be no surprise to anyone on this list: when the flow rate drops,
latency spikes.

> A direct consequence is that we need AQMs at all points in the internet
> where congestion is likely to happen, even for short periods, to mitigate
> the impact of latency spikes. Here I am assuming we ultimately want an
> Internet without lag-spikes, not just low latency on average.

This was something I was wondering about when reading your paper: how
will AQMs help? When the rate drops, the AQM may be able to react
faster, but it won't be able to affect the flow's transmit rate any
faster than your theoretical "perfect" propagation time...

So in effect, your paper seems to be saying "a flow that saturates the
link cannot avoid latency spikes from self-congestion when the link rate
drops, and the only way we can avoid this interfering with *other* flows
is by using FQ"? Or?

Another follow-on question that might be worth looking into is short
flows: many flows fit entirely in an initial window (IW), or at least
never exit slow start. How does that interact with what you're
describing? Is it possible to quantify this effect?

-Toke
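
P.S. To make the kind of spike I have in mind concrete, here is a rough
back-of-envelope sketch (my own illustrative numbers and assumptions,
not figures taken from your paper): a saturating flow keeps data
arriving at the old rate for roughly one RTT after the link rate drops,
since no end-to-end signal can slow the sender any sooner, so the
bottleneck queue grows by about (old_rate - new_rate) * RTT and then
drains at the new rate.

# Rough sketch of the transient queueing delay after a link-rate drop,
# assuming the sender cannot react faster than one RTT of feedback delay.
# Illustrative only; the rates, RTT and the one-RTT reaction bound are
# assumptions of mine, not values from the paper.

def latency_spike(old_rate_bps, new_rate_bps, rtt_s):
    # Data keeps arriving at old_rate for ~one RTT while the link only
    # drains at new_rate, so the queue grows by (old - new) * RTT bits...
    excess_bits = (old_rate_bps - new_rate_bps) * rtt_s
    # ...and that backlog drains at the new rate, adding this much delay:
    return excess_bits / new_rate_bps

# Example: a 100 Mbit/s link halving to 50 Mbit/s, with a 40 ms RTT
print(latency_spike(100e6, 50e6, 0.040))  # ~0.04 s, i.e. a ~40 ms spike

The point being that an AQM at the bottleneck can start signalling
immediately, but the backlog itself seems unavoidable until the sender's
control loop catches up.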