<div dir="ltr">I don't think this feature really hurts TCP.<div>TCP is robust to it in any case, even if both the average RTT and the RTT standard deviation increase.</div><div><br></div><div>And I agree that what matters more is the performance of sparse flows, which is not affected by this feature.</div><div><br></div><div>There is one little thing that might appear negligible, but from my point of view is not, </div><div>which is about giving transport end-points incentives</div><div>to behave in the right way. For instance, a transport end-point that paces its traffic should be considered</div><div>better behaved than one that sends in bursts, and be rewarded for that.</div><div><br></div><div>Flow isolation creates an incentive to pace transmissions and thus create less queueing in the network.</div><div>This feature reduces the strength of that incentive.</div><div>I am not saying that it eliminates the incentive, because there is still flow isolation, but it makes it less</div><div>effective: if you send fewer bursts, you no longer get lower latency in return.</div><div><br></div><div>When I say transport end-point I don't mean only TCP but also QUIC and all other possible TCPs, </div><div>as we all know TCP is really a family of protocols.</div><div><br></div><div>But I understand Jonathan's point.</div><div><br></div><div>Luca</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 19, 2018 at 12:33 PM, Toke Høiland-Jørgensen <span dir="ltr"><<a href="mailto:toke@toke.dk" target="_blank">toke@toke.dk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">Jonathan Morton <<a href="mailto:chromatix99@gmail.com">chromatix99@gmail.com</a>> writes:<br>
<br>
>>>> your solution significantly hurts performance in the common case<br>
>>> <br>
>>> I'm sorry - did someone actually describe such a case? I must have<br>
>>> missed it.<br>
>> <br>
>> I started this whole thread by pointing out that this behaviour results<br>
>> in the delay of the TCP flows scaling with the number of active flows;<br>
>> and that for 32 active flows (on a 10Mbps link), this results in the<br>
>> latency being three times higher than for FQ-CoDel on the same link.<br>
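[Editor's note: the scaling argument above can be made concrete with a back-of-envelope sketch. This assumes strict round-robin service of full-size packets with one packet queued per flow; the packet size, and the absence of AQM and link-layer overhead, are my assumptions, not numbers from the thread.]<br>

```python
# Rough sketch: intra-flow queueing delay under round-robin fair queueing
# when every bulk flow keeps one full-size packet queued at the bottleneck.
# Assumptions (mine, not from the thread): 1514-byte frames, no AQM/overhead.

LINK_RATE_BPS = 10e6   # 10 Mbps bottleneck, as in the test discussed above
PACKET_BYTES = 1514    # assumed full-size Ethernet frame

def per_flow_delay_ms(n_flows: int) -> float:
    """One round of round-robin service: a flow's packet waits behind one
    packet from every active flow (including its own serialization time)."""
    serialization_s = PACKET_BYTES * 8 / LINK_RATE_BPS
    return n_flows * serialization_s * 1000

for n in (1, 8, 32):
    print(f"{n:3d} flows -> ~{per_flow_delay_ms(n):.1f} ms per round")
```

The point is the linear scaling: doubling the number of backlogged flows doubles each flow's per-round queueing delay.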
><br>
> Okay, so intra-flow latency is impaired for bulk flows sharing a<br>
> relatively low-bandwidth link. That's a metric which few people even<br>
> know how to measure for bulk flows, though it is of course important<br>
> for sparse flows. I was hoping you had a common use-case where<br>
> *sparse* flow latency was impacted, in which case we could actually<br>
> discuss it properly.<br>
><br>
> But *inter-flow* latency is not impaired, is it? Nor intra-sparse-flow<br>
> latency? Nor packet loss, which people often do measure (or at least<br>
> talk about measuring) - quite the opposite? Nor goodput, which people<br>
> *definitely* measure and notice, and is influenced more strongly by<br>
> packet loss when in ingress mode?<br>
<br>
</span>As I said, I'll run more tests and post more data once I have time.<br>
<span class=""><br>
> The measurement you took had a baseline latency in the region of 60ms.<br>
<br>
</span>The baseline link latency is 50 ms, which is sorta what you'd expect<br>
from a median non-CDN'ed internet connection.<br>
<span class=""><br>
> That's high enough for a couple of packets per flow to be in flight<br>
> independently of the bottleneck queue.<br>
<br>
</span>Yes. As is the case for most flows going over the public internet...<br>
<span class=""><br>
> I would take this argument more seriously if a use-case that mattered<br>
> was identified.<br>
<br>
</span>Use cases where intra-flow latency matters, off the top of my head:<br>
<br>
- Real-time video with congestion response<br>
- Multiple connections multiplexed over a single flow (HTTP/2 or<br>
QUIC-style)<br>
- Anything that behaves more sanely than TCP at really low bandwidths.<br>
<br>
But yeah, you're right, no one uses any of those... /s<br>
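[Editor's note: the multiplexing bullet can be illustrated with a toy comparison. All numbers here are hypothetical: a short request riding inside a bulk transport flow inherits that flow's backlog, while the same request in its own sparse flow is serviced almost immediately by fair queueing.]<br>

```python
# Toy model (assumptions mine, not measurements from the thread): queueing
# delay seen by one short request when its stream shares a bulk transport
# flow (HTTP/2-style multiplexing) versus riding in its own sparse flow.

LINK_RATE_BPS = 10e6     # 10 Mbps bottleneck, as in the test above
PACKET_BYTES = 1514      # assumed full-size frame
BULK_QUEUE_PKTS = 31     # hypothetical backlog ahead of it in the shared flow

serialization_ms = PACKET_BYTES * 8 / LINK_RATE_BPS * 1000

# Multiplexed: the request drains behind the shared flow's own backlog.
multiplexed_ms = BULK_QUEUE_PKTS * serialization_ms

# Own sparse flow: fair queueing services it ahead of bulk backlogs, so it
# waits roughly one packet's serialization time.
sparse_ms = serialization_ms

print(f"multiplexed: ~{multiplexed_ms:.1f} ms, own sparse flow: ~{sparse_ms:.1f} ms")
```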
<span class=""><br>
> So far, I can't even see a coherent argument for making this tweak<br>
> optional (which is of course possible), let alone removing it<br>
> entirely; we only have a single synthetic benchmark which shows one<br>
> obscure metric move in the "wrong" direction, versus a real use-case<br>
> identified by an actual user in which this configuration genuinely<br>
> helps.<br>
<br>
</span>And I've been trying to explain why you are the one optimising for<br>
pathological cases at the expense of the common case.<br>
<br>
But I don't think we are going to agree based on a theoretical<br>
discussion. So let's just leave this and I'll return with some data once<br>
I've had a chance to run some actual tests of the different use cases.<br>
<span class="HOEnZb"><font color="#888888"><br>
-Toke<br>
</font></span></blockquote></div><br></div>