[Bloat] Solving bufferbloat with TCP using packet delay

Simon Barber simon at superduper.net
Wed Apr 3 20:10:22 EDT 2013


Is your TCP mod available? This would be useful on Android phones, to 
keep background downloads (e.g. updates) from disrupting interactive 
applications (e.g. Skype).

Simon


On 03/20/2013 06:01 PM, Jonathan Morton wrote:
> One beneficial approach is to focus on the receive side rather than the
> send side. It is possible to implement a delay-based algorithm there,
> where it will coexist naturally with a loss-based system on the send
> side, and also with AQM and FQ at the bottleneck link if present.
>
> I did this to make the behaviour of a 3G modem tolerable, which was
> exhibiting extreme (tens of seconds) delays on the downlink through the
> traffic shaper on the provider side. The algorithm simply combined the
> latency measurement with the current receive window size to calculate
> bandwidth, then chose a new receive window size based on that. It worked
> sufficiently well.
>
> The approach is a logical development of receive window sizing
> algorithms which simply measure how long and fat the network is, and
> size the window to cover that measured bandwidth-delay product. In
> fact I implemented it by modifying the basic algorithm in Linux,
> rather than adding a new module.
>
> - Jonathan Morton
>
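
For concreteness, here is a minimal C sketch of the receive-side idea
Jonathan describes above. The names and the 100 ms target are
assumptions for illustration; his actual change was a modification to
Linux's receive-window autotuning, not this code.

    /* Sketch: pick a TCP receive window from a delay measurement.
     * The sender's delivered bandwidth is roughly rwnd / RTT, so
     * new_rwnd = bandwidth * target_rtt shrinks the window whenever
     * the measured RTT exceeds the target, draining the queue. */
    #include <stdint.h>

    #define TARGET_RTT_US  100000   /* assumed latency budget: 100 ms */
    #define MIN_RWND_BYTES 4380     /* assumed floor: ~3 segments */

    static uint32_t next_rcv_window(uint32_t cur_rwnd_bytes,
                                    uint32_t measured_rtt_us)
    {
        uint64_t next;

        if (measured_rtt_us == 0)
            return cur_rwnd_bytes;

        /* bandwidth * target, multiplied before dividing to keep
         * integer truncation small */
        next = ((uint64_t)cur_rwnd_bytes * TARGET_RTT_US)
               / measured_rtt_us;

        if (next < MIN_RWND_BYTES)
            next = MIN_RWND_BYTES;
        if (next > UINT32_MAX)
            next = UINT32_MAX;
        return (uint32_t)next;
    }

Applied on each RTT sample, this converges toward the largest window
that holds the measured RTT near the target, which is why such an
approach can tame the tens-of-seconds delays Jonathan mentions.
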
> On Mar 21, 2013 1:16 AM, "Stephen Hemminger" <stephen at networkplumber.org> wrote:
>
>     On Thu, 21 Mar 2013 07:21:52 +1100
>     grenville armitage <garmitage at swin.edu.au> wrote:
>
>      >
>      >
>      > On 03/21/2013 02:36, Steffan Norberhuis wrote:
>      > > Hello Everyone,
>      > >
>      > > For a project for the Delft Technical University myself and 3
>      > > students are writing a review paper on the buffer bloat problem and
>      > > its possible solutions.
>      >
>      > My colleagues have been dabbling with delay-based CC algorithms,
>      > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
>      > if that's of any interest.
>      >
>      > Some thoughts:
>      >
>      >   - When delay-based TCPs share bottlenecks with loss-based TCPs,
>      >       the delay-based TCPs are punished. Hard. They back-off,
>      >       as queuing delay builds, while the loss-based flow(s)
>      >       blissfully continue to push the queue to full (drop).
>      >       Everyone sharing the bottleneck sees latency fluctuations,
>      >       bounded by the bottleneck queue's effective 'length' (set
>      >       by physical RAM limits or operator-configurable threshold).
>      >
>      >   - The previous point suggests perhaps a hybrid TCP which uses
>      >       delay-based control, but switches (briefly) to loss-based
>      >       control if it detects the bottleneck queue is being
>      >       hammered by other, loss-based TCP flows. Challenging
>      >       questions arise as to what triggers switching between
>      >       delay-based and loss-based modes. (A toy sketch of such a
>      >       mode switch appears after this list.)
>      >
>      >   - Reducing a buffer's length requires meddling with the
>      >       bottleneck(s) (new firmware or new devices). Deploying
>      >       delay-based TCPs requires meddling with endpoints (OS
>      >       upgrade/patch). Many more of the latter than the former.
>      >
>      >   - But yes, in self-contained networks where the end hosts can all
>      >       be told to run a delay-based CC algorithm, delay-based CC
>      >       can mitigate the impact of bloated buffers in your bottleneck
>      >       network devices. Such homogeneous environments do exist, but
>      >       the Internet is quite different.
>      >
>      >   - Alternatively, if one could classify delay-based CC flows into
>      >       one queue and loss-based CC flows into another queue at each
>      >       bottleneck, the first point above might not be such a problem.
>      >       I also want a pink pony ;)  (Of course, once we're considering
>      >       tweaking the bottlenecks with classifiers and multiple queues,
>      >       we might as well continue the surgery and reduce the bloated
>      >       buffers too.)
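
To make the hybrid idea in the second bullet above concrete, here is a
toy C sketch. It is not any deployed algorithm; the threshold, the
competition heuristic, and all names are invented for illustration.

    /* Toy hybrid congestion control: probe gently while queuing
     * delay stays low, yield on rising delay, but fall back to
     * Reno-like loss-based behaviour when losses arrive while the
     * queue is already long (a hint of loss-based competitors). */
    #include <stdbool.h>
    #include <stdint.h>

    #define DELAY_THRESH_US 20000          /* assumed trigger: 20 ms */

    enum cc_mode { CC_DELAY, CC_LOSS };

    struct toy_cc {
        enum cc_mode mode;
        uint32_t cwnd;         /* congestion window, in packets */
        uint32_t base_rtt_us;  /* lowest RTT seen; init to UINT32_MAX */
    };

    static void toy_on_ack(struct toy_cc *cc, uint32_t rtt_us,
                           bool recent_loss)
    {
        uint32_t queuing;

        if (rtt_us < cc->base_rtt_us)
            cc->base_rtt_us = rtt_us;    /* approximates propagation RTT */
        queuing = rtt_us - cc->base_rtt_us;

        if (recent_loss && queuing > DELAY_THRESH_US)
            cc->mode = CC_LOSS;          /* queue full and lossy: compete */
        else if (queuing < DELAY_THRESH_US / 2)
            cc->mode = CC_DELAY;         /* queue drained: be polite again */

        if (cc->mode == CC_DELAY && queuing >= DELAY_THRESH_US) {
            if (cc->cwnd > 2)
                cc->cwnd -= 1;           /* yield before the queue fills */
        } else {
            cc->cwnd += 1;               /* additive increase */
        }
    }

    static void toy_on_loss(struct toy_cc *cc)
    {
        cc->cwnd = cc->cwnd > 2 ? cc->cwnd / 2 : 1;  /* halve on loss */
    }

Even this toy form shows where the hard questions land: how "recent" a
loss must count as, and how to set the threshold, are exactly the
switching triggers flagged as challenging above.
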
>
>     Everyone has to go through the phase of thinking "it can't be
>     that hard, I can invent a better TCP congestion algorithm".
>     But it is hard, and the delay-based algorithms are fundamentally
>     flawed because they see reverse-path delay and cross traffic as
>     false positives. The hybrid ones all fall back to loss under
>     "interesting times", so they really don't buy much.
>
>     Really not convinced that Bufferbloat will be solved by TCP.
>     You can make a TCP algorithm that causes worse latency than Cubic
>     or Reno very easily. But doing better is hard, especially since
>     TCP really can't assume much about its underlying network. There
>     may be random delays and packet loss (wireless), there may be
>     spikes in RTT, and sessions may be long- or short-lived. And you
>     can't assume the whole world is running your algorithm.
>
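
A quick worked illustration of the reverse-path point, with invented
numbers: say the propagation RTT is 40 ms and the forward path is idle,
but an unrelated upload queues 60 ms of delay on the ACK path. A sender
measuring only RTT sees 100 ms, attributes 60 ms to "congestion", and
backs off, even though its own direction is uncongested; a loss-based
sender in the same spot keeps its rate. A plain RTT measurement cannot
tell the two directions apart.
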


