From: Toke Høiland-Jørgensen
To: Michael Welzl, bloat
Date: Mon, 26 Nov 2018 23:13:00 +0100
Message-ID: <87a7lvwkr7.fsf@toke.dk>
In-Reply-To: <38535869-BF61-4FC4-A0FB-96E91CC4F076@ifi.uio.no>
Subject: Re: [Bloat] when does the CoDel part of fq_codel help in the real world?

Michael Welzl writes:

> However, I would like to point out that thesis defense conversations
> are meant to be provocative, by design - when I said that CoDel
> doesn't usually help and long queues would be the right thing for all
> applications, I certainly didn't REALLY REALLY mean that.

Just as I don't REALLY REALLY mean that bigger buffers are always
better, as you so sneakily tricked me into blurting out ;)

> The idea was just to be thought provoking - and indeed I found this
> interesting: e.g., if you think about a short HTTP/1 connection, a
> large buffer just gives it a greater chance to get all packets across,
> and the perceived latency from the reduced round-trips after not
> dropping anything may in fact be less than with a smaller (or
> CoDel'ed) buffer.

Yeah, as a thought experiment I think it kinda works for the use case
you describe: just dump the entire piece of content into the network,
and let it be queued until the receiver catches up. It almost becomes a
CDN-in-the-network kind of thing (it just needs multicast to serve
multiple receivers at once... ;)). The only trouble is that you need
infinite queue space to realise it...
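To put some (entirely made-up) numbers on that tradeoff, here's a toy
back-of-envelope sketch. The link rate, RTT, queue lengths and
loss-recovery costs below are all assumptions for illustration, and it
ignores real TCP dynamics (slow start, cwnd, etc.) entirely:

# Toy model (not a simulation) of the tradeoff above: completion time of
# a short HTTP/1-style transfer behind (a) a deep FIFO that accepts
# everything vs. (b) a CoDel'ed buffer that keeps the queue short but
# may cost the flow a retransmission. All numbers are illustrative
# assumptions.

LINK_RATE = 10e6 / 8          # 10 Mbit/s bottleneck, in bytes/second
BASE_RTT = 0.05               # 50 ms path RTT without queueing
TRANSFER = 20 * 1500          # a 20-packet (~30 kB) response
STANDING_QUEUE = 100 * 1500   # 100 packets already sitting in the FIFO

def deep_fifo_ms():
    """Everything is accepted: wait behind the standing queue, no drops."""
    return 1000 * (BASE_RTT + (STANDING_QUEUE + TRANSFER) / LINK_RATE)

def codel_ms(recovery_rtts):
    """CoDel holds the queue near ~5 ms, but a drop costs extra RTTs."""
    codel_queue = 0.005 * LINK_RATE
    return 1000 * (BASE_RTT * (1 + recovery_rtts)
                   + (codel_queue + TRANSFER) / LINK_RATE)

print(f"deep FIFO, no drops      : {deep_fifo_ms():6.1f} ms")
print(f"CoDel'ed, fast retransmit: {codel_ms(1):6.1f} ms")
print(f"CoDel'ed, tail loss / RTO: {codel_ms(4):6.1f} ms")

So whether the big buffer "wins" really depends on how expensive the
loss recovery turns out to be for that particular short flow, and on
how long the standing queue in front of it is - which I guess is
exactly the point of treating it as a thought experiment.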
> BTW, Anna Brunstrom was also very quick to give me the HTTP/2.0
> example in the break after the defense.

Yup, I was thinking of HTTP/2 when I said "control data on the same
connection as the payload". I can see from Pete's transcript that it
didn't come across terribly clearly, though :P

> I'll use the opportunity to tell folks that I was also pretty
> impressed with Toke's thesis as well as his performance at the
> defense.

Thanks! It's been fun (both the writing and the defending) :)

> With that, I wish all the best to all you bloaters out there - thanks
> for reducing our queues!

Yes, a huge thank you to you all from me as well; working with the
community here has been among my favourite aspects of my thesis work!

-Toke