What I am missing in this and similar papers are quantitative tests of what happens if the proposed scheme is actually used over the internet...
The inherent idea seems to be that if one knew the available capacity, one could 'jump' the cwnd immediately to that window... (ignoring the fact that the rwnd typically takes a while to increase accordingly*). My gut feeling tells me this will make the dynamics at bottleneck queues even more volatile, and I am not sure whether that will result in an overall better outcome.

Sidenote: this is again a packet pair method with a side helping of "delay" increase measurements (inside the driver stack, so conceptually related to BQL/AQL), so the challenges are all the same.

*) Quoting the paper: "Finally, the rwnd selection module is used to determine whether the value of receiver window (rwnd) embedded in the ACK packet should be ignored, according to the judgement whether it reveals the exhaustion of the receiver's buffer, thus to remove the restriction of rwnd on slow start acceleration."
Erm, I think this paper should have been rejected on this argument alone... this is exactly the mindset ("I know better than my communication partner") that results in a non- or sub-optimally working internet... I wish that those who do not appreciate slow start would keep their fingers off it.
Not saying that slow start is perfect, but if you ignore the components that make slow start effective, your replacement likely will not cut it. The fact that slow start gradually ramps up the cwnd (and pretty aggressively) is one of its features and not a bug, as the alternative of jumping directly to the appropriate capacity for each flow requires an oracle (see the quick sketch below the footnotes)... so a "perfect" solution is clearly out of reach and all we are talking about is different shades of "good enough" (and to repeat myself, whether a solution is good enough does not solely depend on whether the solution, if implemented at a single end-node, delivers "better" numbers for that end-node, but also on its effect on the rest of the network).**

**) I occasionally wish for a tit-for-tat scheduler that is generous at first but will "retaliate" if a flow abuses that generosity...
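To put rough numbers on "gradually ramps up the cwnd (and pretty aggressively)" and on the packet-pair sidenote, here is a little back-of-the-envelope sketch of my own (not from the paper; IW = 10 segments and MSS = 1500 bytes are assumed defaults):

# Back-of-the-envelope sketch (not from the paper): how quickly classic
# slow start, doubling cwnd each RTT, already converges on a path's BDP,
# next to the naive packet-pair idea of "jumping" to an estimated capacity.
# IW = 10 segments and MSS = 1500 bytes are common defaults, assumed here.

from math import ceil, log2

def rtts_to_reach_bdp(bw_bps: float, rtt_s: float, iw: int = 10, mss: int = 1500) -> int:
    """RTTs of slow start needed before cwnd covers the path's BDP."""
    bdp_segments = bw_bps * rtt_s / (8 * mss)
    if bdp_segments <= iw:
        return 0
    return ceil(log2(bdp_segments / iw))

def packet_pair_estimate(pkt_bytes: int, dispersion_s: float) -> float:
    """Naive packet-pair estimate: bottleneck rate ~ packet size / inter-arrival gap.
    In practice cross traffic, batching and driver-level queueing (the BQL/AQL-ish
    delay signal mentioned above) make this far noisier than it looks here."""
    return 8 * pkt_bytes / dispersion_s

if __name__ == "__main__":
    for bw_mbps, rtt_ms in [(100, 20), (1000, 20), (1000, 100)]:
        n = rtts_to_reach_bdp(bw_mbps * 1e6, rtt_ms / 1e3)
        print(f"{bw_mbps} Mbit/s, {rtt_ms} ms RTT: ~{n} RTTs of slow start to reach BDP")
    # A pair of 1500-byte packets arriving 120 microseconds apart at the receiver:
    print(f"packet-pair estimate: {packet_pair_estimate(1500, 120e-6) / 1e6:.0f} Mbit/s")

Even at 1 Gbit/s and 100 ms RTT, classic slow start is within roughly ten RTTs of the BDP, while the naive packet-pair estimate is exactly the quantity that cross traffic and driver-level batching make so hard to measure reliably.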
On 28 December 2023 04:50:59 CET, Dave Taht via Bloat <bloat@lists.bufferbloat.net> wrote:
> I am very happy to be seeing various advances in slow start techniques.
>
> https://www.researchgate.net/profile/Li-Lingang-2/publication/372708933_Small_Chunks_can_Talk_Fast_Bandwidth_Estimation_without_Filling_up_the_Bottleneck_Link/links/64d1a210806a9e4e5cf75162/Small-Chunks-can-Talk-Fast-Bandwidth-Estimation-without-Filling-up-the-Bottleneck-Link.pdf