From: Luca Muscariello
Date: Thu, 19 Apr 2018 13:55:47 +0200
To: Toke Høiland-Jørgensen
Cc: Jonathan Morton, Cake List
Subject: Re: [Cake] A few puzzling Cake results

I don't think that this feature really hurts TCP. TCP is robust to that in any case, even if the average RTT and the RTT standard deviation increase. And I agree that what matters more is the performance of sparse flows, which is not affected by this feature.

There is one little thing that might appear negligible, but it is not from my point of view: giving transport end-points an incentive to behave in the right way. For instance, a transport end-point that paces its traffic should be considered as behaving better than one that sends in bursts, and should be rewarded for that. Flow isolation creates an incentive to pace transmissions and so to create less queueing in the network. This feature reduces that incentive. I am not saying that it eliminates the incentive, because there is still flow isolation, but it makes it less effective: if you send fewer bursts, you no longer get lower latency in return. (Two rough sketches at the end of this message, below the quoted discussion, illustrate the point.)

When I say transport end-point I don't mean only TCP but also QUIC and all the other possible TCPs; as we all know, TCP is really a family of protocols.

But I understand Jonathan's point.

Luca

On Thu, Apr 19, 2018 at 12:33 PM, Toke Høiland-Jørgensen wrote:

> Jonathan Morton writes:
>
> >>>> your solution significantly hurts performance in the common case
> >>>
> >>> I'm sorry - did someone actually describe such a case? I must have
> >>> missed it.
> >>
> >> I started this whole thread by pointing out that this behaviour results
> >> in the delay of the TCP flows scaling with the number of active flows;
> >> and that for 32 active flows (on a 10Mbps link), this results in the
> >> latency being three times higher than for FQ-CoDel on the same link.
> >
> > Okay, so intra-flow latency is impaired for bulk flows sharing a
> > relatively low-bandwidth link. That's a metric which few people even
> > know how to measure for bulk flows, though it is of course important
> > for sparse flows. I was hoping you had a common use-case where
> > *sparse* flow latency was impacted, in which case we could actually
> > discuss it properly.
> >
> > But *inter-flow* latency is not impaired, is it? Nor intra-sparse-flow
> > latency? Nor packet loss, which people often do measure (or at least
> > talk about measuring) - quite the opposite? Nor goodput, which people
> > *definitely* measure and notice, and is influenced more strongly by
> > packet loss when in ingress mode?
>
> As I said, I'll run more tests and post more data once I have time.
>
> > The measurement you took had a baseline latency in the region of 60ms.
>
> The baseline link latency is 50 ms; which is sorta what you'd expect
> from a median non-CDN'en internet connection.
>
> > That's high enough for a couple of packets per flow to be in flight
> > independently of the bottleneck queue.
>
> Yes. As is the case for most flows going over the public internet...
>
> > I would take this argument more seriously if a use-case that mattered
> > was identified.
>
> Use cases where intra-flow latency matters, off the top of my head:
>
> - Real-time video with congestion response
> - Multiple connections multiplexed over a single flow (HTTP/2 or
>   QUIC-style)
> - Anything that behaves more sanely than TCP at really low bandwidths.
>
> But yeah, you're right, no one uses any of those... /s
>
> > So far, I can't even see a coherent argument for making this tweak
> > optional (which is of course possible), let alone removing it
> > entirely; we only have a single synthetic benchmark which shows one
> > obscure metric move in the "wrong" direction, versus a real use-case
> > identified by an actual user in which this configuration genuinely
> > helps.
>
> And I've been trying to explain why you are the one optimising for
> pathological cases at the expense of the common case.
>
> But I don't think we are going to agree based on a theoretical
> discussion. So let's just leave this and I'll return with some data once
> I've had a chance to run some actual tests of the different use cases.
>
> -Toke
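
P.S. Since "pacing" can mean different things, here is a minimal sketch, in Python, of the kind of end-point behaviour I have in mind. It is an illustration only, not anything Cake-specific: it assumes a Linux sender, where the SO_MAX_PACING_RATE socket option is honoured by the fq qdisc (and by TCP-internal pacing on recent kernels); the host, port and rate are made up for the example.

import socket

# SO_MAX_PACING_RATE is a Linux socket option (value 47); older Python
# versions may not export the constant, so fall back to the raw number.
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

def open_paced_connection(host, port, pacing_rate_bytes_per_sec):
    """Open a TCP connection whose transmissions are paced rather than bursty.

    With this option set, the kernel spreads the socket's segments out in
    time at the given rate instead of dumping a full congestion window
    back-to-back onto the wire.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE,
                    int(pacing_rate_bytes_per_sec))
    sock.connect((host, port))
    return sock

if __name__ == "__main__":
    # Hypothetical endpoint; cap the socket at ~1 MB/s of paced transmission.
    s = open_paced_connection("example.net", 5001, 1_000_000)
    s.sendall(b"x" * 10_000_000)
    s.close()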
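
P.P.S. And a back-of-the-envelope sketch of the intra-flow delay scaling Toke describes in the quoted discussion above. This is not a Cake or FQ-CoDel simulation; it just assumes plain per-flow round robin in which every backlogged flow keeps at least one full-size packet queued, so a packet from one flow waits behind one packet from each of the other flows. The 10 Mbit/s, 32-flow and 50 ms figures come from the thread; the 1514-byte frame size is an assumption.

# Rough lower bound on per-flow queueing delay under per-flow round robin,
# assuming each backlogged flow holds at least one full-size packet.

LINK_RATE_BPS = 10_000_000      # 10 Mbit/s bottleneck, as in the thread
PACKET_BYTES = 1514             # assumed full-size Ethernet frame
BASE_RTT_MS = 50.0              # baseline link latency, as in the thread

def round_robin_queue_delay_ms(n_flows: int) -> float:
    """Time a packet waits while one packet of every other flow is served."""
    serialization_ms = PACKET_BYTES * 8 / LINK_RATE_BPS * 1000.0
    return (n_flows - 1) * serialization_ms

for n in (1, 4, 8, 16, 32):
    q = round_robin_queue_delay_ms(n)
    print(f"{n:2d} bulk flows: ~{q:5.1f} ms queueing, "
          f"~{BASE_RTT_MS + q:5.1f} ms RTT seen by each flow")

At 32 flows this already adds roughly 37 ms on top of the 50 ms base RTT, and it grows linearly with the number of flows; queues deeper than one packet per flow only make it worse.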