I won't deny that there are problems with delay-based congestion control, but at least some of the same problems also apply to AQM.

In the presence of greedy UDP flows, AQM effectively ends up favouring those flows. Sure, the more packets a flow sends, the more of its packets get dropped. But only TCP will lower its congestion window; the greedy UDP flow will happily keep filling the buffer. The effect is that the UDP flow ends up with most of the available bandwidth. Yes, the queue will be shorter, so the TCP flow will see lower delay, but it will also get much lower throughput.

Of course, the same applies to delay-based congestion control: the greedy flow will still take most of the bandwidth, and on top of that the delay will be higher. The point remains that neither AQM nor delay-based congestion control can provide a fair outcome when greedy or stupid flows are present. Unless, of course, there are multiple queues for the different types of flows, but yeah, where's that pink pony? Let's make it a unicorn while we're at it.
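To make that concrete, here's a throwaway toy model (purely illustrative, not a real simulator; the link rate, buffer size, send rates and thresholds are all made-up assumptions). It puts an AIMD loss-based flow, a delay-based flow that backs off on rising queuing delay, and an unresponsive constant-rate flow behind one bottleneck FIFO, once with plain tail drop and once with a crude RED-like early drop standing in for AQM. In both runs the unresponsive flow ends up with most of the capacity; the AQM run mostly just keeps the average queue shorter.

# Toy model of one bottleneck queue shared by an AIMD loss-based flow, a
# delay-based flow, and an unresponsive constant-rate ("greedy UDP") flow.
# Purely illustrative: 1 tick ~ 1 RTT, all units are packets, all numbers
# are made up.
import random
from collections import deque

CAPACITY = 10   # packets the bottleneck serves per tick
BUFFER = 100    # maximum queue length in packets
UDP_RATE = 8    # unresponsive flow's constant send rate
TICKS = 2000

def run(use_aqm):
    random.seed(1)
    queue = deque()                      # FIFO of flow labels
    cwnd = {"loss": 2.0, "delay": 2.0}   # windows of the two responsive flows
    served = {"loss": 0, "delay": 0, "udp": 0}
    qlen_sum = 0

    for _ in range(TICKS):
        # Offered load this tick, interleaved randomly.
        arrivals = (["udp"] * UDP_RATE
                    + ["loss"] * int(cwnd["loss"])
                    + ["delay"] * int(cwnd["delay"]))
        random.shuffle(arrivals)

        dropped = {"loss": 0, "delay": 0, "udp": 0}
        for pkt in arrivals:
            if use_aqm and random.random() < max(0.0, (len(queue) - 5) / BUFFER):
                dropped[pkt] += 1            # crude RED-like early drop
            elif len(queue) < BUFFER:
                queue.append(pkt)
            else:
                dropped[pkt] += 1            # tail drop

        # Delay-based flow backs off when estimated queuing delay builds.
        if len(queue) / CAPACITY > 2.0:
            cwnd["delay"] = max(1.0, cwnd["delay"] - 1)
        else:
            cwnd["delay"] += 1

        # Loss-based flow reacts only to its own drops (AIMD).
        if dropped["loss"] > 0:
            cwnd["loss"] = max(1.0, cwnd["loss"] / 2)
        else:
            cwnd["loss"] += 1

        # The unresponsive flow reacts to nothing at all.

        # Bottleneck serves up to CAPACITY packets this tick.
        for _ in range(min(CAPACITY, len(queue))):
            served[queue.popleft()] += 1
        qlen_sum += len(queue)

    total = sum(served.values())
    shares = {k: round(v / total, 2) for k, v in served.items()}
    print("AQM " if use_aqm else "tail", shares, "avg queue:", round(qlen_sum / TICKS, 1))

run(use_aqm=False)
run(use_aqm=True)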
And yes, there are many more endpoints than switches/routers. But I imagine they are also much more homogeneous. I could be wrong about this, since I don't know too much about the network equipment used by ISPs. Either way, it seems to me that most endpoints run either Windows, Linux (or Android), a BSD variant, or something made by Apple. And I would imagine that most embedded systems (the remaining endpoints that don't run a consumer OS) aren't connected directly to the internet and won't be able to wreak much havoc. Consumer operating systems are regularly updated as it is, so sneaking in a new TCP variant should be *relatively* easy. Again, I might be wrong, but these are just my thoughts.

In short, I'm not saying there are no problems, but I am saying it might be too easy to dismiss the idea as ineffective.

Kind regards,
Maarten de Vries

On Thu, 21 Mar 2013 07:21:52 +1100
grenville armitage <garmitage@swin.edu.au> wrote:
>
>
> On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > Hello Everyone,
> >
> > For a project for the Delft Technical University myself and 3
> > students are writing a review paper on the buffer bloat problem and
> > its possible solutions.
>
> My colleagues have been dabbling with delay-based CC algorithms,
> with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> if that's of any interest.
>
> Some thoughts:
>
> - When delay-based TCPs share bottlenecks with loss-based TCPs,
> the delay-based TCPs are punished. Hard. They back-off,
> as queuing delay builds, while the loss-based flow(s)
> blissfully continue to push the queue to full (drop).
> Everyone sharing the bottleneck sees latency fluctuations,
> bounded by the bottleneck queue's effective 'length' (set
> by physical RAM limits or operator-configurable threshold).
>
> - The previous point suggests perhaps a hybrid TCP which uses
> delay-based control, but switches (briefly) to loss-based
> control if it detects the bottleneck queue is being
> hammered by other, loss-based TCP flows. Challenging
> questions arise as to what triggers switching between
> delay-based and loss-based modes.
>
> - Reducing a buffer's length requires meddling with the
> bottleneck(s) (new firmware or new devices). Deploying
> delay-based TCPs requires meddling with endpoints (OS
> upgrade/patch). Many more of the latter than the former.
>
> - But yes, in self-contained networks where the end hosts can all
> be told to run a delay-based CC algorithm, delay-based CC
> can mitigate the impact of bloated buffers in your bottleneck
> network devices. Such homogeneous environments do exist, but
> the Internet is quite different.
>
> - Alternatively, if one could classify delay-based CC flows into one
> queue and loss-based CC flows into another queue at each
> bottleneck, the first point above might not be such a problem.
> I also want a pink pony ;) (Of course, once we're considering
> tweaking the bottlenecks with classifiers and multiple queues, we
> might as well continue the surgery and reduce the bloated buffers too.)
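As a footnote to the first point quoted above: the "back off as queuing delay builds" behaviour of a delay-based TCP boils down to a Vegas-style window update along the lines of the sketch below. This is only a rough illustration of the general idea, not the actual CAIA/FreeBSD code; the alpha/beta thresholds and the RTT bookkeeping are my own assumptions.

# Rough sketch of a Vegas-style delay-based window update (illustrative
# only; alpha/beta and the RTT handling are assumptions, not any
# particular implementation).

class DelayBasedCC:
    def __init__(self, alpha=2.0, beta=4.0):
        self.cwnd = 2.0                  # congestion window, in packets
        self.base_rtt = float("inf")     # lowest RTT seen ~ queue-free path RTT
        self.alpha = alpha               # grow if fewer than alpha pkts queued
        self.beta = beta                 # shrink if more than beta pkts queued

    def on_rtt_sample(self, rtt):
        # Track the minimum RTT as an estimate of the propagation delay.
        self.base_rtt = min(self.base_rtt, rtt)
        # Estimate how many of our packets sit in the bottleneck queue,
        # from the gap between expected (cwnd/base_rtt) and actual
        # (cwnd/rtt) throughput: queued = cwnd * (1 - base_rtt/rtt).
        queued = self.cwnd * (1.0 - self.base_rtt / rtt)
        if queued < self.alpha:
            self.cwnd += 1.0                       # little queuing delay: grow
        elif queued > self.beta:
            self.cwnd = max(2.0, self.cwnd - 1.0)  # delay building: back off
        # else: hold the window steady

# Example: as RTT samples climb above the 100 ms base RTT, the window stops
# growing and then backs off, long before a loss-based flow (which needs an
# actual drop) would slow down.
cc = DelayBasedCC()
for rtt in [0.100, 0.100, 0.120, 0.150, 0.250, 0.400, 0.400]:
    cc.on_rtt_sample(rtt)
    print(f"rtt={rtt*1000:.0f} ms  cwnd={cc.cwnd:.1f}")

The contrast with a loss-based sender is exactly the coexistence problem from that first point: a delay-based sender yields as soon as the queue starts to grow, while a loss-based sender keeps pushing until something is actually dropped.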