From: Luca Muscariello <luca.muscariello@gmail.com>
To: "Toke Høiland-Jørgensen" <toke@toke.dk>
Cc: Jonathan Morton <chromatix99@gmail.com>,
Cake List <cake@lists.bufferbloat.net>
Subject: Re: [Cake] A few puzzling Cake results
Date: Thu, 19 Apr 2018 13:55:47 +0200 [thread overview]
Message-ID: <CAHx=1M6JZ1LBo+X8DQiHtREcGDJA=N1Vg5uqU11=UE-s4QVYEQ@mail.gmail.com> (raw)
In-Reply-To: <87muxzzdsd.fsf@toke.dk>
I don't think that this feature really hurts TCP; TCP is robust to
that in any case, even if both the average RTT and its standard
deviation increase.
And I agree that what matters more is the performance of sparse
flows, which is not affected by this feature.
There is one little thing that might appear negligible, but from my
point of view it is not: giving transport end-points an incentive to
behave in the right way. For instance, a transport end-point that
paces its traffic should be considered better behaved than one that
sends in bursts, and should be rewarded for that.
Flow isolation creates an incentive to pace transmissions and thereby
create less queueing in the network.
This feature reduces that incentive. I am not saying it eliminates
the incentive, because there is still flow isolation, but it makes it
less effective: if you send fewer bursts, you no longer get lower
latency in return.
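To make the incentive concrete, here is a toy round-robin sketch (a
deliberately simplified model, not Cake's actual scheduler): under
per-flow queueing, a sender that paces at its fair share is served on
arrival, while a bursty sender's packets queue behind each other. The
1.21 ms slot time corresponds to one 1514-byte packet at 10 Mbit/s.

```python
from collections import deque

# One "slot" = time to serialize one 1514-byte packet at 10 Mbit/s
# (~1.21 ms). Integer slot arithmetic keeps the sketch exact.
SLOT_MS = 1.21
N_PKTS = 8

# The paced flow spaces packets at its fair share (every 2 slots,
# since two flows share the link); the bursty flow dumps all at t=0.
pending = {"paced": [2 * i for i in range(N_PKTS)],
           "burst": [0] * N_PKTS}
queues = {name: deque() for name in pending}
sojourn = {name: [] for name in pending}

t = 0
while any(pending.values()) or any(queues.values()):
    for name, times in pending.items():      # admit arrivals due by now
        while times and times[0] <= t:
            queues[name].append(times.pop(0))
    served = False
    for name, q in queues.items():           # round-robin service
        if q:
            sojourn[name].append(t - q.popleft())
            t += 1
            served = True
    if not served:
        t += 1                               # link idle, advance time

for name, s in sojourn.items():
    print(f"{name}: mean queueing delay ~{sum(s)/len(s) * SLOT_MS:.1f} ms")
```

In this toy run the paced flow sees zero queueing delay while the
bursty flow's packets wait, on average, several slots behind their
own siblings; that per-flow difference is exactly the reward for
pacing that flow isolation provides.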
When I say transport end-point I am not thinking only of TCP but also
of QUIC and all the other TCP variants; as we all know, TCP is really
a family of protocols.
But I understand Jonathan's point.
Luca
On Thu, Apr 19, 2018 at 12:33 PM, Toke Høiland-Jørgensen <toke@toke.dk>
wrote:
> Jonathan Morton <chromatix99@gmail.com> writes:
>
> >>>> your solution significantly hurts performance in the common case
> >>>
> >>> I'm sorry - did someone actually describe such a case? I must have
> >>> missed it.
> >>
> >> I started this whole thread by pointing out that this behaviour results
> >> in the delay of the TCP flows scaling with the number of active flows;
> >> and that for 32 active flows (on a 10Mbps link), this results in the
> >> latency being three times higher than for FQ-CoDel on the same link.
> >
> > Okay, so intra-flow latency is impaired for bulk flows sharing a
> > relatively low-bandwidth link. That's a metric which few people even
> > know how to measure for bulk flows, though it is of course important
> > for sparse flows. I was hoping you had a common use-case where
> > *sparse* flow latency was impacted, in which case we could actually
> > discuss it properly.
> >
> > But *inter-flow* latency is not impaired, is it? Nor intra-sparse-flow
> > latency? Nor packet loss, which people often do measure (or at least
> > talk about measuring) - quite the opposite? Nor goodput, which people
> > *definitely* measure and notice, and is influenced more strongly by
> > packet loss when in ingress mode?
>
> As I said, I'll run more tests and post more data once I have time.
>
> > The measurement you took had a baseline latency in the region of 60ms.
>
> The baseline link latency is 50 ms, which is roughly what you'd
> expect from a median, non-CDN internet connection.
>
> > That's high enough for a couple of packets per flow to be in flight
> > independently of the bottleneck queue.
>
> Yes. As is the case for most flows going over the public internet...
>
> > I would take this argument more seriously if a use-case that mattered
> > was identified.
>
> Use cases where intra-flow latency matters, off the top of my head:
>
> - Real-time video with congestion response
> - Multiple connections multiplexed over a single flow (HTTP/2 or
> QUIC-style)
> - Anything that behaves more sanely than TCP at really low bandwidths.
>
> But yeah, you're right, no one uses any of those... /s
>
> > So far, I can't even see a coherent argument for making this tweak
> > optional (which is of course possible), let alone removing it
> > entirely; we only have a single synthetic benchmark which shows one
> > obscure metric move in the "wrong" direction, versus a real use-case
> > identified by an actual user in which this configuration genuinely
> > helps.
>
> And I've been trying to explain why you are the one optimising for
> pathological cases at the expense of the common case.
>
> But I don't think we are going to agree based on a theoretical
> discussion. So let's just leave this and I'll return with some data once
> I've had a chance to run some actual tests of the different use cases.
>
> -Toke
>
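For concreteness, the delay scaling Toke describes above can be
sketched numerically. This is back-of-the-envelope arithmetic, not a
model of Cake itself: under round-robin fair queueing, a bulk flow's
intra-flow queueing delay grows linearly with the number of competing
bulk flows (a 1514-byte frame is assumed).

```python
# Illustrative arithmetic: per-flow queueing delay under round-robin
# scheduling scales with the number of active bulk flows.

LINK_BPS = 10e6        # 10 Mbit/s bottleneck, as in the test cited above
PKT_BYTES = 1514       # assumed full-size Ethernet frame

serialize_s = PKT_BYTES * 8 / LINK_BPS   # ~1.21 ms per packet

for n_flows in (1, 4, 32):
    # One round-robin cycle: every active flow sends one packet.
    cycle_s = n_flows * serialize_s
    # If each flow keeps ~k packets queued, a new packet waits ~k cycles.
    for k in (1, 3):
        print(f"{n_flows:2d} flows, {k} pkt(s)/flow queued: "
              f"~{k * cycle_s * 1000:.1f} ms intra-flow delay")
```

At 32 flows one cycle is already ~39 ms, so whether the queue
discipline holds one or three packets per flow is the difference
between roughly 39 ms and 116 ms of intra-flow delay, which is the
order of the three-fold gap discussed in the thread.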