Subject: Re: [Ecn-sane] per-flow scheduling
From: Sebastian Moeller
Date: Fri, 21 Jun 2019 08:59:14 +0200
To: Bob Briscoe
Cc: "Holland, Jake", "ecn-sane@lists.bufferbloat.net", tsvwg IETF list
Message-Id: <46D1ABD8-715D-44D2-B7A0-12FE2A9263FE@gmx.de>
In-Reply-To: <350f8dd5-65d4-d2f3-4d65-784c0379f58c@bobbriscoe.net>

> On Jun 19, 2019, at 16:12, Bob Briscoe wrote:
>
> Jake, all,
>
> You may not be aware of my long history of concern about how per-flow scheduling within endpoints and networks will limit the Internet in future.
> I find per-flow scheduling a violation of the e2e principle in such a profound way - the dynamic choice of the spacing between packets - that most people don't even associate it with the e2e principle.

Maybe because it is not a violation of the e2e principle at all? My point is that with resources shared between endpoints, the endpoints simply should have no expectation that their choice of spacing between packets will be conserved, for the simple reason that it seems generally impossible to guarantee that inter-packet spacing survives the path (think "cross-traffic" at the bottleneck hop and the general bunching-up of packets in the queue at a fast-to-slow transition*; see the toy sketch in the P.S. below). I would also claim that the way L4S works (if it works) is to synchronize all active flows at the bottleneck, which in turn means each sender has only a very small time window in which to transmit a packet for it to hit its "slot" in the bottleneck L4S scheduler; otherwise L4S's low-queueing-delay guarantees will not hold. In other words, the senders have basically no say in the "spacing between packets", so I fail to see how L4S improves upon FQ in that regard.

IMHO, having per-flow fairness as the default seems quite reasonable; endpoints can still throttle their own flows to their liking (a minimal sketch of what I mean is in the P.P.S. below). Per-flow fairness can still be "abused", so by itself it might not be sufficient, but neither is L4S, since it offers at best stochastic guarantees: as a single-queue AQM (let's ignore the RFC 3168 part of the AQM) there is some probability of sending a throttling signal to a low-bandwidth flow (fair enough, it is only a mild throttling signal, but still).

But enough about my opinion: what is the ideal fairness measure in your mind, and what is realistically achievable over the Internet?

Best Regards
	Sebastian

>
> I detected that you were talking about FQ in a way that might have assumed my concern with it was just about implementation complexity. If you (or anyone watching) is not aware of the architectural concerns with per-flow scheduling, I can enumerate them.
>
> I originally started working on what became L4S to prove that it was possible to separate out reducing queuing delay from throughput scheduling. When Koen and I started working together on this, we discovered we had identical concerns on this.
>
>
> Bob
>
> --
> ________________________________________________________________
> Bob Briscoe                               http://bobbriscoe.net/
>
> _______________________________________________
> Ecn-sane mailing list
> Ecn-sane@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/ecn-sane
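
P.S.: A toy FIFO sketch of the "spacing is not conserved at the bottleneck" point above. All the numbers (1500-byte packets, 1 Gbit/s sender pacing, 100 Mbit/s bottleneck) and the departures() helper are made-up illustration values, not measurements or anyone's implementation; it does not even include cross-traffic, which would only make things worse:

PKT_BITS = 1500 * 8          # assume 1500-byte packets
SENDER_RATE = 1e9            # assume the sender paces at 1 Gbit/s -> 12 us between packets
BOTTLENECK_RATE = 100e6      # assume a 100 Mbit/s bottleneck      -> 120 us to serialize one packet

def departures(arrivals, link_rate, pkt_bits=PKT_BITS):
    """FIFO queue: each packet starts serializing when the link is next free."""
    serialization = pkt_bits / link_rate
    out, link_free = [], 0.0
    for t in arrivals:
        start = max(t, link_free)
        link_free = start + serialization
        out.append(link_free)
    return out

arrivals = [i * PKT_BITS / SENDER_RATE for i in range(5)]   # the sender's carefully chosen spacing
deps = departures(arrivals, BOTTLENECK_RATE)

print("arrival spacing   (us):", [round((arrivals[i + 1] - arrivals[i]) * 1e6, 1) for i in range(4)])
print("departure spacing (us):", [round((deps[i + 1] - deps[i]) * 1e6, 1) for i in range(4)])
# arrival spacing   (us): [12.0, 12.0, 12.0, 12.0]
# departure spacing (us): [120.0, 120.0, 120.0, 120.0]

Whatever pacing the sender picks, once a queue forms the departure spacing is dictated by the bottleneck's serialization time, not by the endpoint.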
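
P.P.S.: And a minimal deficit-round-robin sketch of the kind of per-flow scheduling I mean by "per-flow fairness as the default". Purely illustrative: the drr() helper, the quantum and the two example flows are assumptions of mine, not fq_codel's or any real qdisc's code:

from collections import deque

def drr(flows, quantum=1500, rounds=3):
    """flows: dict name -> deque of packet sizes (bytes). Returns the send order."""
    deficit = {name: 0 for name in flows}
    order = []
    for _ in range(rounds):
        for name, q in flows.items():
            if not q:
                continue
            deficit[name] += quantum
            while q and q[0] <= deficit[name]:
                deficit[name] -= q[0]
                order.append((name, q.popleft()))
            if not q:
                deficit[name] = 0   # classic DRR: drop the deficit when a flow goes idle
    return order

flows = {
    "bulk":   deque([1500] * 6),   # back-to-back full-size packets (still backlogged after 3 rounds)
    "sparse": deque([200, 200]),   # two small packets arriving at the same time
}

for name, size in drr(flows):
    print(f"{name:6s} {size:5d} B")

The sparse flow's packets get interleaved with the bulk flow instead of waiting behind its whole backlog, which is exactly the isolation property I am arguing should be the default.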