From: Sebastian Moeller
Date: Wed, 26 Jun 2019 14:48:32 +0200
To: "David P. Reed"
Cc: Jonathan Morton, ecn-sane@lists.bufferbloat.net, Brian E Carpenter, tsvwg IETF list
Subject: Re: [Ecn-sane] [tsvwg] per-flow scheduling
> On Jun 23, 2019, at 00:09, David P. Reed wrote:
>
> [...]
>
> per-flow scheduling is appropriate on a shared link. However, the end-to-end argument would suggest that the network not try to divine which flows get preferred.
> And beyond the end-to-end argument, there's a practical problem - since the ideal state of a shared link means that it ought to have no local backlog in the queue, the information needed to schedule "fairly" isn't in the queue backlog itself. If there is only one packet, what's to schedule?

[...]

Excuse my stupidity, but the "only one single packet" case is the theoretical limiting case, no?

Even on a link not running at capacity, keeping at most one packet at a hop effectively requires a mechanism to "synchronize" all senders whose packets traverse that hop: no further packet may arrive until the "current" one has been handed to the PHY, otherwise we transiently queue 2 packets (and this rationale should hold for any small N). The more packets per second a hop handles, the less likely a newcomer is to avoid running into already-queued packets, that is, the more often the queue transiently grows.

Not having a CS background, I fail to see how this required synchronized state can exist outside of a few steady-state configurations where things change slowly enough for the seemingly required synchronization to actually happen (given that the feedback loop, e.g. through ACKs, seems somewhat jittery). Since senders do not know in advance which path their packets will take or which hop will turn out to be the critical one, there seems to be no a priori way to synchronize all senders; in fact I fail to see how synchronized behavior could be guaranteed across more than one hop at all (unless all hops are extremely uniform).

I happen to believe that L4S suffers from the same conceptual issue, plus overly generic promises. From the RITE website:

"We are so used to the unpredictability of queuing delay, we don’t know how good the Internet would feel without it. The RITE project has developed simple technology to make queuing delay a thing of the past—not just for a select few apps, but for all."

This seems to be missing a "conditions apply" statement.

Best Regards
	Sebastian
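
P.S.: To put a rough number on the "transiently queue 2 packets" point, here is a toy single-hop sketch. The assumptions are mine and purely illustrative (completely unsynchronized, i.e. Poisson, arrivals; a fixed per-packet transmission time; FIFO; the function name and parameters are made up for this example). It counts how often a newly arriving packet finds at least one packet already at the hop:

import random

def fraction_arrivals_finding_backlog(rho, n_packets=100_000, seed=1):
    # Toy single-hop model: Poisson (unsynchronized) arrivals, fixed
    # per-packet transmission time, FIFO. Returns the fraction of
    # arriving packets that find at least one packet already at the
    # hop, i.e. that transiently grow the queue beyond a single packet.
    random.seed(seed)
    service = 1.0             # transmission time of one packet (arbitrary units)
    mean_gap = service / rho  # mean inter-arrival time for utilisation rho
    t = 0.0                   # arrival clock
    busy_until = 0.0          # time at which the hop finishes its current backlog
    hits = 0
    for _ in range(n_packets):
        t += random.expovariate(1.0 / mean_gap)  # next unsynchronized arrival
        if busy_until > t:    # newcomer meets an existing packet
            hits += 1
            busy_until += service
        else:                 # hop idle: packet goes straight to the PHY
            busy_until = t + service
    return hits / n_packets

for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"load {rho:.0%}: {fraction_arrivals_finding_backlog(rho):.1%} "
          "of packets arrive to a non-empty hop")

Under these assumptions the fraction comes out close to the link utilisation itself, so even a hop loaded to, say, 30% sees roughly every third packet arrive to a non-empty queue; the single-packet ideal really does seem to need active synchronization rather than just headroom.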