Date: Mon, 26 Sep 2022 13:29:39 +0200
From: Sebastian Moeller
To: Dave Taht, Dave Taht via Bloat, bloat
Subject: Re: [Bloat] temporal and spatial locality

IMHO this is another example of 'batching helps to ameliorate per-batch set-up/processing costs'. The trick, I would say, is to size the batches so that they do not introduce too much latency granularity. For a server that might be a coarser granularity than for a client or a home router. My gut feeling is that the acceptable batch size is tied to the required transmission/processing time of a batch.
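To put rough numbers on that intuition (purely illustrative, and assuming a full 64 KB GSO super-packet as the "batch"; that figure is my assumption, not something from the paper):

/* Back-of-the-envelope: how long does one batch occupy the wire?
 * Illustrative numbers only; the batch is assumed to be one 64 KB
 * GSO super-packet. */
#include <stdio.h>

int main(void)
{
        const double batch_bytes = 65536.0;
        const double rates_bps[] = { 10e6, 100e6, 1e9, 10e9 };

        for (unsigned i = 0; i < sizeof(rates_bps) / sizeof(rates_bps[0]); i++)
                printf("%6.0f Mbit/s -> %7.3f ms per batch\n",
                       rates_bps[i] / 1e6, batch_bytes * 8e3 / rates_bps[i]);
        return 0;
}

At 1 Gbps a full super-packet occupies the link for roughly half a millisecond, while at 10 Mbps the same batch ties it up for more than 50 ms; that is the latency granularity I mean, and why batch size has to shrink as the rate goes down.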
In a sense, cake already takes this into account coarsely by disabling GSO splitting at shaped rates >= 1 Gbps. Maybe we could also scale the quantum more aggressively with the rate, but it is a tradeoff...
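Purely as a sketch of what 'scaling the quantum with the shaped rate' could look like (this is not cake's actual code; quantum_for_rate, the ~100 us target and the clamp values are made up for illustration):

#include <stdint.h>

/* Illustrative only: pick a DRR quantum that grows with the shaped rate,
 * aiming for roughly 100 us of wire time per quantum, clamped between one
 * full Ethernet frame and one GSO super-packet. */
static uint32_t quantum_for_rate(uint64_t rate_bps)
{
        uint64_t q = rate_bps / 8 / 10000;      /* bytes sent in ~100 us */

        if (q < 1514)                           /* at least one MTU-sized frame */
                q = 1514;
        if (q > 65536)                          /* at most one GSO super-packet */
                q = 65536;
        return (uint32_t)q;
}

The tradeoff I mean: a larger quantum amortises per-packet scheduling cost, but a sparse flow may then have to wait behind a full quantum of some other flow's bytes, so the scheduling granularity (and with it the worst-case inter-flow latency) grows along with it.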


Regards

On 26 September 2022 03:19:30 CEST, Dave Taht via Bloat <bloat@lists.bufferbloat.net> wrote:
>Some good counterarguments against FQ and pacing.
>
>https://www.usenix.org/system/files/nsdi22-paper-ghasemirahni.pdf
>
>--
>FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>Dave Täht CEO, TekLibre, LLC
>_______________________________________________
>Bloat mailing list
>Bloat@lists.bufferbloat.net
>https://lists.bufferbloat.net/listinfo/bloat

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.