<div dir="ltr"><div><div dir="auto">I think this is a very good comment on the discussion at the defense about the comparison between </div><div dir="auto">SFQ with longest-queue drop and FQ_CoDel.</div></div><div dir="auto"><br></div><div dir="auto">A congestion-controlled protocol such as TCP, or others including QUIC, LEDBAT and so on,</div><div dir="auto">needs at least a BDP's worth of data in the transmission queue to get full link efficiency, i.e. so that the queue never empties.</div><div dir="auto">This gives a rule of thumb for sizing buffers which is very practical and which, thanks to flow isolation, becomes very accurate.</div><div dir="auto"><br></div><div dir="auto">Which is: </div><div dir="auto"><br></div><div dir="auto">1) Find a way to keep the number of backlogged flows at a reasonable value. </div><div dir="auto">This largely depends on the minimum fair rate an application may need in the long term.</div><div dir="auto">We discussed a little the mechanisms available in the literature to achieve that.</div><div dir="auto"><br></div><div dir="auto">2) Fix the largest RTT you want to serve at full utilization and size the buffer as BDP * N_backlogged. </div><div dir="auto">Or the other way round: check how much memory you can use </div><div dir="auto">in the router/line card/device and, for a fixed N, compute the largest RTT you can serve at full utilization. </div><div dir="auto"><br></div><div dir="auto">3) There is still some memory to dimension for sparse flows in addition to that, but this is not based on the BDP. </div><div dir="auto">It is enough to compute the total utilization of the sparse flows and use the same simple model Toke has used </div><div dir="auto">to compute the (de)prioritization probability.</div><div dir="auto"><br></div><div dir="auto">This procedure would allow sizing not only FQ_CoDel but also SFQ.</div><div>It would be interesting to compare the two under this buffer sizing. 
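The sizing rule in steps 1-3 can be sketched as a toy calculation (the function names and example figures here are mine, not from the discussion):

```python
def bdp_bytes(link_rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to keep the link busy."""
    return link_rate_bps * rtt_s / 8

def buffer_bytes(link_rate_bps: float, max_rtt_s: float, n_backlogged: int) -> float:
    """Step 2, forward direction: one BDP per backlogged flow."""
    return bdp_bytes(link_rate_bps, max_rtt_s) * n_backlogged

def largest_rtt_s(link_rate_bps: float, memory_bytes: float, n_backlogged: int) -> float:
    """Step 2, inverse direction: largest RTT served at full utilization
    given a memory budget and a fixed number N of backlogged flows."""
    return memory_bytes * 8 / (link_rate_bps * n_backlogged)

# Example: 100 Mbit/s link, 100 ms worst-case RTT, 8 backlogged flows
# -> 1.25 MB of BDP per flow, 10 MB of queue memory in total.
print(buffer_bytes(100e6, 0.100, 8))  # 10000000.0
```

The sparse-flow memory of step 3 would come on top of this figure, but it does not scale with the BDP.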
</div><div>It would also be interesting to compare another mechanism that we mentioned during the defense:</div><div>AFD plus a sparse-flow queue, which is, BTW, already available in Cisco Nexus switches for data centres.</div><div><br></div><div>I think that the CoDel part would still provide the ECN feature, which none of the others can have.</div><div>However, the others, and the last one especially, can be implemented in silicon at reasonable cost.</div><div><br></div><div dir="auto"><br></div></div><div><br><div class="gmail_quote"><div dir="ltr">On Mon 26 Nov 2018 at 22:30, Jonathan Morton <<a href="mailto:chromatix99@gmail.com" target="_blank">chromatix99@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">> On 26 Nov, 2018, at 9:08 pm, Pete Heist <<a href="mailto:pete@heistp.net" target="_blank">pete@heistp.net</a>> wrote:<br>
> <br>
> So I just thought to continue the discussion- when does the CoDel part of fq_codel actually help in the real world?<br>
<br>
Fundamentally, without Codel the only limits on the congestion window would be when the sender or receiver hit configured or calculated rwnd and cwnd limits (the rwnd is visible on the wire and usually chosen to be large enough to be a non-factor), or when the queue overflows. Large windows require buffer memory in both sender and receiver, increasing costs on the sender in particular (who typically has many flows to manage per machine).<br>
<br>
Queue overflow tends to result in burst loss and head-of-line blocking in the receiver, which is visible to the user as a pause and subsequent jump in the progress of their download, accompanied by a major fluctuation in the estimated time to completion. The lost packets also consume capacity upstream of the bottleneck which does not contribute to application throughput. These effects are independent of whether overflow dropping occurs at the head or tail of the bottleneck queue, though recovery occurs more quickly (and fewer packets might be lost) if dropping occurs from the head of the queue.<br>
<br>
From a pure throughput-efficiency standpoint, Codel allows using ECN for congestion signalling instead of packet loss, potentially eliminating packet loss and the associated head-of-line blocking entirely. Even without ECN, the actual cwnd is kept near the minimum necessary to satisfy the BDP of the path, reducing memory requirements and significantly shortening the recovery time of each loss cycle, to the point where the end-user may not notice that delivery is not perfectly smooth, and implementing accurate completion-time estimators is considerably simplified.<br>
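To put numbers on "the minimum necessary to satisfy the BDP of the path" (a hypothetical helper of my own, assuming a 1448-byte MSS):<br>
<br>

```python
import math

def min_cwnd_segments(link_rate_bps: float, rtt_s: float, mss_bytes: int = 1448) -> int:
    """Smallest congestion window, in MSS-sized segments, that still
    covers the path's bandwidth-delay product (keeps the link busy)."""
    return math.ceil(link_rate_bps * rtt_s / 8 / mss_bytes)

# 50 Mbit/s at 20 ms RTT: 125000 bytes of BDP, i.e. about 87 segments.
print(min_cwnd_segments(50e6, 0.020))  # 87
```

Any cwnd much above that figure only adds standing queue, which is exactly what Codel trims away.<br>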
<br>
An important use-case is where two sequential bottlenecks exist on the path, the upstream one being only slightly higher capacity but lacking any queue management at all. This is presently common in cases where home CPE implements inbound shaping on a generic ISP last-mile link. In that case, without Codel running on the second bottleneck, traffic would collect in the first bottleneck's queue as well, greatly reducing the beneficial effects of FQ implemented on the second bottleneck. In this topology, the overall effect is inter-flow as well as intra-flow.<br>
<br>
The combination of Codel with FQ is done in such a way that a separate instance of Codel is implemented for each flow. This means that congestion signals are only sent to flows that require them, and non-saturating flows are unmolested. This makes the combination synergistic, where each component offers an improvement to the behaviour of the other.<br>
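A heavily simplified sketch of that per-flow pairing (my own toy model; real CoDel also shrinks its interval by sqrt(count) between successive drops, which is omitted here):<br>
<br>

```python
class CoDelState:
    """One CoDel instance: signal only when queue delay stays above
    the target for at least one interval (simplified control law)."""
    def __init__(self, target_ms=5.0, interval_ms=100.0):
        self.target_ms = target_ms
        self.interval_ms = interval_ms
        self.first_above_ms = None  # when sojourn time first exceeded target

    def should_signal(self, sojourn_ms, now_ms):
        if sojourn_ms < self.target_ms:
            self.first_above_ms = None  # standing queue drained; reset
            return False
        if self.first_above_ms is None:
            self.first_above_ms = now_ms  # start the persistence clock
            return False
        return now_ms - self.first_above_ms >= self.interval_ms

class FqCodelSketch:
    """Each flow gets its own CoDel state, so a saturating flow is
    signalled (dropped or ECN-marked) while sparse flows are untouched."""
    def __init__(self):
        self.per_flow = {}

    def on_dequeue(self, flow_id, sojourn_ms, now_ms):
        state = self.per_flow.setdefault(flow_id, CoDelState())
        return state.should_signal(sojourn_ms, now_ms)

fq = FqCodelSketch()
fq.on_dequeue("bulk", 20.0, 0.0)           # above target; starts the clock
print(fq.on_dequeue("bulk", 20.0, 100.0))  # True: persistent queue, signal
print(fq.on_dequeue("voip", 1.0, 100.0))   # False: sparse flow unmolested
```

The point of the sketch is only the isolation: the "bulk" flow's persistent delay triggers a signal without the "voip" flow ever seeing one.<br>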
<br>
- Jonathan Morton<br>
<br>
_______________________________________________<br>
Bloat mailing list<br>
<a href="mailto:Bloat@lists.bufferbloat.net" target="_blank">Bloat@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/bloat" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/bloat</a><br>
</blockquote></div></div>