From: Sebastian Moeller
Date: Sun, 10 Jul 2022 19:27:22 +0200
To: Michael Welzl
Cc: Dave Täht, bloat
Subject: Re: [Bloat] [iccrg] Musings on the future of Internet Congestion Control

Hi Michael,

so I reread your paper and stewed a bit on it. I believe that I do not buy some of your premises. E.g., you write:

"We will now examine two factors that make the present situation particularly worrisome. First, the way the infrastructure has been evolving gives TCP an increasingly large operational space in which it does not see any feedback at all. Second, most TCP connections are extremely short. As a result, it is quite rare for a TCP connection to even see a single congestion notification during its lifetime."

and you seem to see a problem in the fact that flows might be able to finish their data-transfer business while still in slow start. I see the same data, but see no problem. Unless we have an oracle that tells each sender (over a shared bottleneck) exactly how much to send at any given point in time, different control loops will interact on those intermediary nodes. I might be limited in my depth of thought here, but having each flow probe for capacity seems exactly the right approach... and doubling the CWND or rate every RTT is pretty aggressive already (making slow start shorter, i.e. reaching capacity faster within the slow-start framework, requires either starting from a higher initial value (which is what increasing IW tries to achieve?) or using an increase factor larger than 2 per RTT). I consider an increased IW the milder of those two approaches. And once one accepts that gradually increasing the rate is the way forward, it falls out logically that some flows will finish before they reach steady-state capacity, especially if a flow's available capacity is large. So what exactly is the problem with short flows not reaching capacity, and what alternative exists that does not lead to carnage if more aggressive start-up phases drive the bottleneck load into emergency-drop territory?
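To put some rough numbers on that (a quick back-of-the-envelope sketch of my own; the 1500-byte MSS and the 1 Gbit/s, 20 ms example path are just assumptions, nothing from your paper):

import math

MSS_BITS = 1500 * 8  # assumed segment size in bits

def rtts_to_capacity(capacity_bps, rtt_s, iw_segments, growth=2.0):
    # RTTs of slow start until iw * growth^n segments cover the path's BDP
    bdp_segments = capacity_bps * rtt_s / MSS_BITS
    if iw_segments >= bdp_segments:
        return 0
    return math.ceil(math.log(bdp_segments / iw_segments, growth))

for iw in (4, 10, 100):
    print(iw, rtts_to_capacity(1e9, 0.02, iw))
# -> 9, 8 and 5 RTTs on a 1 Gbit/s, 20 ms path: going from IW 4 to IW 100
#    shaves only four round trips off slow start, so plenty of short flows
#    will still finish before ever reaching capacity.

Which to me just underlines that fiddling with IW or the growth factor changes the picture by a few RTTs, not in kind.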
And as an aside, a PEP (performance-enhancing proxy) that does not enhance performance is useless at best and likely harmful (rather a PDP, a performance-degrading proxy). The network so far has been doing reasonably well with putting more protocol smarts at the ends than in the parts in between. I have witnessed the arguments in the "L4S wars" about how little processing one can ask the more central network nodes to perform; flow queueing, for example, which would solve a lot of the issues (a hyper-aggressive slow-start flow would mostly hurt itself if it overshot its capacity), seems to be a complete no-go.

I personally think what we should do instead is have the network supply more information to the end points so they can control their behavior better. E.g., if we mandated a max_queue-fill-percentage field in a protocol header and had each node write max(current_value_of_the_field, queue-filling_percentage_of_the_current_node) into every packet, end points could estimate how close to congestion the path is (e.g. by looking at the rate at which that queueing percentage changes) and tailor their growth/shrinkage rates accordingly, both during slow start and during congestion avoidance. But alas we seem to be going down the path of a relatively dumb 1-bit signal that gives us an under-defined queue-fill state instead; to estimate relative queue-fill dynamics from that we need many samples (so literally too little, too late, or L3T2), but I digress.
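To make that a bit more concrete, here is a rough sketch of the idea (the field name, the 0-100% encoding and the thresholds are purely illustrative assumptions on my part, not a worked-out protocol):

from dataclasses import dataclass

@dataclass
class Packet:
    max_queue_fill_pct: int = 0  # hypothetical header field, 0..100

def stamp(pkt, local_queue_fill_pct):
    # every node on the path records the worst queue fill seen so far
    pkt.max_queue_fill_pct = max(pkt.max_queue_fill_pct, local_queue_fill_pct)

def adjust_cwnd(cwnd, latest_pct, previous_pct):
    # endpoint: react to the level *and* the trend of the reported fill
    slope = latest_pct - previous_pct
    if latest_pct < 25 and slope <= 0:
        return cwnd * 2        # path largely empty: keep slow-start-like growth
    if latest_pct < 75:
        return cwnd + 1        # queues building somewhere: grow gently
    return cwnd / 2            # close to overload: back off before drops

# e.g. three hops with 5%, 40% and 12% full queues:
pkt = Packet()
for fill in (5, 40, 12):
    stamp(pkt, fill)
print(pkt.max_queue_fill_pct)  # -> 40, the worst queue along the path

The point being that a multi-bit path-maximum value lets an endpoint read off a trend after a couple of ACKs, whereas a single under-defined bit needs many samples to convey the same thing.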
Regards
	Sebastian

> On Jun 20, 2022, at 14:58, Michael Welzl wrote:
>
>> On Jun 19, 2022, at 6:53 PM, Sebastian Moeller via Bloat wrote:
>>
>> I might be out to lunch here, but why not accept a "total" speed limit per TCP flow and simply expect bulk transfers to employ more parallel streams, which is what I think download-manager apps have already been doing for a long time?
>>
>> And if we accept an upper ceiling per TCP flow, we should be able to select a reasonable upper bound for the initial window as well, no?
>
> Using multiple flows is a way to do it, albeit not a very good way (better to use a better congestion control than just run multiple instances - but of course, one works with what one can - a download manager is on the receiver side and can achieve this there). This is not related to the IW issue, which is relevant for short flows - the most common type of traffic by far (a point that our paper makes, along with many prior publications).
>
>>> On Jun 15, 2022, at 19:49, Dave Taht via Bloat wrote:
>>>
>>> ---------- Forwarded message ---------
>>> From: Michael Welzl
>>> Date: Wed, Jun 15, 2022 at 1:02 AM
>>> Subject: [iccrg] Musings on the future of Internet Congestion Control
>>> To:
>>> Cc: Peyman Teymoori, Md Safiqul Islam, Hutchison, David, Stein Gjessing
>>>
>>> Dear ICCRGers,
>>>
>>> We just got a paper accepted that I wanted to share:
>>> Michael Welzl, Peyman Teymoori, Safiqul Islam, David Hutchison, Stein Gjessing: "Future Internet Congestion Control: The Diminishing Feedback Problem", accepted for publication in IEEE Communications Magazine, 2022.
>>>
>>> The preprint is available at:
>>> https://arxiv.org/abs/2206.06642
>>> I thought that it could provoke an interesting discussion in this group.
>>>
>>> Figures 4 and 5 in this paper show that, across the world, network links do not just become "faster": the range between the low end and the high end grows too.
>>> This, I think, is problematic for a global end-to-end standard - e.g., it means that we cannot simply keep scaling IW along forever (or, if we do, utilization will decline more and more).
>>>
>>> So, we ask: what is the way ahead? Should congestion control really stay end-to-end?
>>
>> Do we really have any other option? It is the sender that decides how much to dump into the network after all. Sure, the network could help by giving some information back as a hint (say a 4-bit value encoding the maximum relative queue-fill level measured along the full one-way path), but in the end, unless the network is willing to police its idea of acceptable send behavior, it is still the sender's decision what to send when, no?
>
> In a scenario where a connection-splitting PEP is installed before a lower-capacity downstream path segment, this PEP can already ask for more data today. It's doing it in an ugly way, by "cheating" TCP, which yields various disadvantages… so I'd say that this is part of the problem. PEPs exist, yet have to do things poorly because they are treated as if they shouldn't exist, and so they become unpopular for, well, having done things poorly...
>
>> Given the discussion about L4S and FQ it seems clear that the "network" is not prepared to implement anything close to what is required to move congestion control into the network... I have a feeling though that I am missing your point and am barking up the wrong tree ;)
>
> I guess you are. This is about middleboxes doing much "heavier" stuff.
>
> Cheers,
> Michael