From: Sebastian Moeller
Date: Wed, 20 Mar 2019 23:12:58 +0100
To: ecn-sane@lists.bufferbloat.net
Cc: Jonathan Morton, bloat
Subject: Re: [Ecn-sane] [Bloat] [tsvwg] [iccrg] Fwd: [tcpPrague] Implementation and experimentation of TCP Prague/L4S hackaton at IETF104

Hi ECN-sane,

I reduced the CC list as I do not expect this to further the discussion much, but since I feel I invested too much time reading the L4S RFCs and papers, I still want to air this here to get some feedback.

> On Mar 20, 2019, at 21:55, Greg White wrote:
>
> In normal conditions, L4S offers "Maximize Throughput" + "Minimize Loss" + "Minimize Latency" all at once. It doesn't require an application to have to make that false choice (hence the name "Low Latency Low Loss Scalable throughput").
>
> If an application would prefer to "Minimize Cost", then I suppose it could adjust its congestion control to be less aggressive (assuming it is congestion controlled). Also, as you point out, the LE PHB could be an option as well.
>
> What section 4.1 in the dualq draft is referring to is a case where the system needs to protect against unresponsive, overloading flows in the low latency queue. In that case something has to give (you can't ensure low latency & low loss to e.g. a 100 Mbps unresponsive flow arriving at a 50 Mbps bottleneck).

Which somewhat puts the claim of "ultra-low queueing latency" (see https://tools.ietf.org/html/draft-ietf-tsvwg-l4s-arch-03) into perspective. IMHO, ultra-low queueing latency is only going to happen in steady state if effectively pacing senders spread their sending rates such that packets arrive nicely spread out already. I note that with that kind of traffic pattern, other AQMs would also offer ultra-low queueing latency. I also note that "'Data Centre to the Home': Ultra-Low Latency for All" states that they see 20 ms queueing delay with a 7 ms base link delay at 40 Mbps. A back-of-the-envelope calculation tells us that at that rate, ((40 * 1000^2) / (1538 * 8)) * 0.02 = 65 packets will be queued in the dualpi2 AQM; add 65 more on the L4S side, and queueing delay will be at 40 ms. Switching to a pacing and less aggressive TCP variant helps to smooth out the steady-state bursts, but will do zilch for transients caused by an increase in the number of active flows. I wonder how this is going to behave once new flows come in at a high rate (at a packet capacity of 3.251 kHz, adding loads at 100 Hz does not seem like that heavy a load to me). Actually, I don't wonder: the dual-AQM draft indicates that the queue will grow to 250 ms and tail dropping will start.
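For anyone who wants to check the arithmetic above, here is a small sketch. The 40 Mbps rate, the 1538-byte on-the-wire frame size, and the 20 ms sojourn time come from the paper as quoted above; everything else is derived, and the doubling to 130 packets is my assumption that the L4S queue holds the same number again.

```python
# Back-of-the-envelope check of the dualpi2 queue numbers (hypothetical sketch).
RATE_BPS = 40 * 1000**2          # 40 Mbps bottleneck, from the paper
PKT_BITS = 1538 * 8              # full-size Ethernet frame incl. overhead
QUEUE_DELAY_S = 0.020            # 20 ms reported queueing delay

pkt_rate_hz = RATE_BPS / PKT_BITS            # drain rate in packets/s (~3251)
pkts_classic = pkt_rate_hz * QUEUE_DELAY_S   # packets sitting in the classic queue

# Assumption: the L4S queue holds another 65 packets, doubling the backlog.
total_delay_ms = 2 * pkts_classic / pkt_rate_hz * 1000

print(round(pkt_rate_hz), round(pkts_classic), round(total_delay_ms))
# → 3251 65 40
```

So the quoted 3.251 kHz drain rate, 65-packet backlog, and 40 ms combined delay are all consistent with each other.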
If I were responsible at the IETF, I really would want to see some analysis of resistance against adversarial traffic patterns before going that route, especially in light of the fuzzy classification via ECT(1).

Best Regards
	Sebastian

>
> -Greg
>
> On 3/20/19, 2:05 PM, "Bloat on behalf of Jonathan Morton" wrote:
>
>> On 20 Mar, 2019, at 9:39 pm, Gorry Fairhurst wrote:
>>
>> Concerning "Maximize Throughput", if you don't need scalability to very high rates, then is your requirement met by TCP-like semantics, as in TCP with SACK/loss or even better TCP with ABE/ECT(0)?
>
> My intention with "Maximise Throughput" is to support the bulk-transfer applications that TCP is commonly used for today. In Diffserv terminology, you may consider it equivalent to "Best Effort".
>
> As far as I can see, L4S offers "Maximise Throughput" and "Minimise Latency" services, but not the other two.
>
> - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat