[Bloat] Solving bufferbloat with TCP using packet delay
Stephen Hemminger
stephen at networkplumber.org
Wed Mar 20 19:16:22 EDT 2013
On Thu, 21 Mar 2013 07:21:52 +1100
grenville armitage <garmitage at swin.edu.au> wrote:
>
>
> On 03/21/2013 02:36, Steffan Norberhuis wrote:
> > Hello Everyone,
> >
> > For a project at Delft University of Technology, three fellow
> > students and I are writing a review paper on the bufferbloat
> > problem and its possible solutions.
>
> My colleagues have been dabbling with delay-based CC algorithms,
> with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
> if that's of any interest.
>
> Some thoughts:
>
> - When delay-based TCPs share bottlenecks with loss-based TCPs,
> the delay-based TCPs are punished. Hard. They back off
> as queuing delay builds, while the loss-based flow(s)
> blissfully continue to push the queue to full (drop).
> Everyone sharing the bottleneck sees latency fluctuations,
> bounded by the bottleneck queue's effective 'length' (set
> by physical RAM limits or an operator-configurable threshold).
> (The toy simulation after this list sketches the dynamic.)
>
> - The previous point suggests perhaps a hybrid TCP which uses
> delay-based control, but switches (briefly) to loss-based
> control if it detects the bottleneck queue is being
> hammered by other, loss-based TCP flows. Challenging
> questions arise as to what triggers switching between
> delay-based and loss-based modes.
>
> - Reducing a buffer's length requires meddling with the
> bottleneck(s) (new firmware or new devices). Deploying
> delay-based TCPs requires meddling with endpoints (OS
> upgrade/patch). Many more of the latter than the former.
>
> - But yes, in self-contained networks where the end hosts can all
> be told to run a delay-based CC algorithm, delay-based CC
> can mitigate the impact of bloated buffers in your bottleneck
> network devices. Such homogeneous environments do exist, but
> the Internet is quite different.
>
> - Alternatively, if one could classify delay-based CC flows into one
> queue and loss-based CC flows into another queue at each
> bottleneck, the first point above might not be such a problem.
> I also want a pink pony ;) (Of course, once we're considering
> tweaking the bottlenecks with classifiers and multiple queues, we
> might as well continue the surgery and reduce the bloated buffers
> too.)
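> To make the first point concrete, here's a toy discrete-time
> simulation (Python, invented constants, not any real TCP stack) of
> a delay-sensitive sender sharing one FIFO bottleneck with a
> loss-only sender:
>
>     CAPACITY = 10         # packets the link drains per tick
>     QUEUE_LIMIT = 100     # the "bloated" buffer
>     DELAY_THRESHOLD = 20  # depth at which the delay-based flow retreats
>
>     queue = 0.0
>     delay_rate, loss_rate = 5.0, 5.0  # sending rates (pkts/tick)
>
>     for tick in range(200):
>         queue += delay_rate + loss_rate - CAPACITY
>         dropped = max(0.0, queue - QUEUE_LIMIT)
>         queue = min(max(queue, 0.0), QUEUE_LIMIT)
>
>         # Delay-based flow: retreats as soon as queuing delay builds.
>         if queue > DELAY_THRESHOLD:
>             delay_rate = max(1.0, delay_rate * 0.9)
>         else:
>             delay_rate += 0.5
>
>         # Loss-based flow: grows until the buffer overflows, then halves.
>         if dropped > 0:
>             loss_rate = max(1.0, loss_rate / 2)
>         else:
>             loss_rate += 0.5
>
>     print(f"delay-based {delay_rate:.1f} vs loss-based {loss_rate:.1f} "
>           f"pkts/tick; queue {queue:.0f}")
>
> The loss-based flow keeps the queue pinned near its limit, so the
> delay-based flow sits above its threshold on almost every tick and
> ends up starved at its floor rate.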
Everyone has to go through the phase of thinking
"it can't be that hard, I can invent a better TCP congestion algorithm."
But it is hard, and the delay-based algorithms are fundamentally
flawed because reverse-path delay and cross traffic inflate their
delay signal, producing false positives. The hybrid ones all fall
back to loss-based behavior in "interesting times," so they really
don't buy much.
I'm really not convinced that bufferbloat will be solved by TCP.
You can very easily make a TCP algorithm that causes worse latency
than Cubic or Reno. But doing better is hard, especially since TCP
really can't assume much about its underlying network. There may be
random delays and packet loss (wireless), there may be spikes in RTT,
and sessions may be long- or short-lived. And you can't assume the
whole world is running your algorithm.
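For example (a sketch with made-up numbers, loosely in the style of
Vegas's expected-vs-actual throughput estimate, not anyone's
production code), one link-layer retransmission burst on wireless
looks exactly like queue buildup:

    cwnd = 20        # packets in flight
    base_rtt = 50.0  # ms, minimum RTT observed so far

    def estimated_backlog(rtt_ms):
        # Vegas-style estimate of packets sitting in queues:
        # (expected throughput - actual throughput) * base RTT.
        expected = cwnd / base_rtt
        actual = cwnd / rtt_ms
        return (expected - actual) * base_rtt

    print(f"{estimated_backlog(52.0):.1f} pkts queued (calm link)")
    print(f"{estimated_backlog(110.0):.1f} pkts queued (802.11 retry burst)")

The 110 ms sample is link noise, not congestion, but the estimator
reports a backlog of about 11 packets and a delay-based sender
dutifully backs off.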