<p>One useful approach is to focus on the receive side rather than the send side. A delay-based algorithm implemented there coexists naturally with a loss-based system on the send side, and also with AQM and FQ at the bottleneck link, if present. </p>
<p>I did this to make tolerable the behaviour of a 3G modem that exhibited extreme (tens of seconds) delays on the downlink, in the traffic shaper on the provider side. The algorithm simply combined the latency measurement with the current receive window size to estimate the bandwidth, then chose a new receive window size based on that estimate. It worked sufficiently well.</p>
<p>The approach is a logical development of receive-window sizing algorithms that simply measure how long and fat the network is (the bandwidth-delay product) and size the window to encompass that. In fact, I implemented it by modifying the basic algorithm in Linux rather than adding a new module. </p>
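<p>The control step itself is only a few lines. The sketch below is illustrative rather than the actual patch; the function name, its parameters and the 100 ms queueing allowance are invented for this example, and it assumes an RTT sample is available (e.g. from TCP timestamps):</p>
<pre>
/* Illustrative sketch of the receive-window controller described
 * above -- not the actual Linux patch. */

#define MIN_RWND        (4u * 1460u)           /* floor: a few MSS */
#define MAX_RWND        (4u * 1024u * 1024u)   /* memory-imposed cap */
#define TARGET_QUEUE_MS 100u    /* queueing delay we will tolerate */

/* One control step: estimate bandwidth from the current window and
 * the measured RTT, then size the window for the base RTT plus a
 * small queueing allowance. */
unsigned int rwnd_update(unsigned int cur_rwnd,    /* bytes advertised now */
                         unsigned int rtt_ms,      /* latest RTT sample */
                         unsigned int base_rtt_ms) /* minimum RTT seen */
{
        unsigned int bw, new_rwnd;

        if (rtt_ms == 0)
                return cur_rwnd;

        /* bandwidth in bytes/ms: one window drains per RTT */
        bw = cur_rwnd / rtt_ms;

        /* fill the pipe plus the tolerated queue, no more */
        new_rwnd = bw * (base_rtt_ms + TARGET_QUEUE_MS);

        if (new_rwnd &lt; MIN_RWND)
                new_rwnd = MIN_RWND;
        if (new_rwnd &gt; MAX_RWND)
                new_rwnd = MAX_RWND;
        return new_rwnd;
}
</pre>
<p>The clamps matter: without the floor, one bad RTT spike shrinks the window, which in turn shrinks the bandwidth estimate, and the controller can talk itself down to nothing.</p>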
<p> - Jonathan Morton<br>
</p>
<div class="gmail_quote">On Mar 21, 2013 1:16 AM, "Stephen Hemminger" <<a href="mailto:stephen@networkplumber.org">stephen@networkplumber.org</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Thu, 21 Mar 2013 07:21:52 +1100<br>
grenville armitage <<a href="mailto:garmitage@swin.edu.au">garmitage@swin.edu.au</a>> wrote:<br>
<br>
><br>
><br>
> On 03/21/2013 02:36, Steffan Norberhuis wrote:<br>
> > Hello Everyone,<br>
> ><br>
> > For a project at Delft University of Technology, three fellow<br>
> > students and I are writing a review paper on the bufferbloat<br>
> > problem and its possible solutions.<br>
><br>
> My colleagues have been dabbling with delay-based CC algorithms,<br>
> with FreeBSD implementations (<a href="http://caia.swin.edu.au/urp/newtcp/" target="_blank">http://caia.swin.edu.au/urp/newtcp/</a>)<br>
> if that's of any interest.<br>
><br>
> Some thoughts:<br>
><br>
> - When delay-based TCPs share bottlenecks with loss-based TCPs,<br>
> the delay-based TCPs are punished. Hard. They back-off,<br>
> as queuing delay builds, while the loss-based flow(s)<br>
> blissfully continue to push the queue to full (drop).<br>
> Everyone sharing the bottleneck sees latency fluctuations,<br>
> bounded by the bottleneck queue's effective 'length' (set<br>
> by physical RAM limits or operator-configurable threshold).<br>
><br>
> - The previous point suggests perhaps a hybrid TCP which uses<br>
> delay-based control, but switches (briefly) to loss-based<br>
> control if it detects the bottleneck queue is being<br>
> hammered by other, loss-based TCP flows. Challenging<br>
> questions arise as to what triggers switching between<br>
> delay-based and loss-based modes.<br>
><br>
> - Reducing a buffer's length requires meddling with the<br>
> bottleneck(s) (new firmware or new devices). Deploying<br>
> delay-based TCPs requires meddling with endpoints (OS<br>
> upgrade/patch). Many more of the latter than the former.<br>
><br>
> - But yes, in self-contained networks where the end hosts can all<br>
> be told to run a delay-based CC algorithm, delay-based CC<br>
> can mitigate the impact of bloated buffers in your bottleneck<br>
> network devices. Such homogeneous environments do exist, but<br>
> the Internet is quite different.<br>
><br>
> - Alternatively, if one could classify delay-based CC flows into one<br>
> queue and loss-based CC flows into another queue at each<br>
> bottleneck, the first point above might not be such a problem.<br>
> I also want a pink pony ;) (Of course, once we're considering<br>
> tweaking the bottlenecks with classifiers and multiple queues, we<br>
> might as well continue the surgery and reduce the bloated buffers too.)<br>
<br>
Everyone has to go through the phase of thinking<br>
"it can't be that hard, I can invent a better TCP congestion algorithm".<br>
But it is hard, and the delay-based algorithms are fundamentally<br>
flawed because they register reverse-path delay and cross traffic as<br>
false positives. The hybrid ones all fall back to loss-based control<br>
in "interesting times", so they really don't buy much.<br>
<br>
I'm really not convinced that bufferbloat will be solved by TCP.<br>
You can very easily make a TCP algorithm that causes worse latency<br>
than Cubic or Reno, but doing better is hard, especially since TCP<br>
can't assume much about its underlying network. There may be random<br>
delays and packet loss (wireless), there may be spikes in RTT, and<br>
sessions may be long- or short-lived. And you can't assume the whole<br>
world is running your algorithm.<br>
<br>
</blockquote></div>