General list for discussing Bufferbloat
From: Simon Barber <simon@superduper.net>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Solving bufferbloat with TCP using packet delay
Date: Wed, 03 Apr 2013 17:10:22 -0700
Message-ID: <515CC4EE.5050904@superduper.net>
In-Reply-To: <CAJq5cE1xeMcf=gRa_v4nQE9nL-QontjGkYkOkXSxz2bWaBwrsw@mail.gmail.com>

Is your TCP mod available? This would be useful on Android phones, to keep 
background downloads (e.g. updates) from disrupting interactive applications 
such as Skype.

Simon


On 03/20/2013 06:01 PM, Jonathan Morton wrote:
> One beneficial approach is to focus on the receive side rather than the
> send side. It is possible to implement a delay based algorithm there,
> where it will coexist naturally with a loss based system on the send
> side, and also with AQM and FQ at the bottleneck link if present.
>
> I did this to make the behaviour of a 3G modem tolerable; it was
> exhibiting extreme (tens of seconds) delays on the downlink through the
> traffic shaper on the provider side. The algorithm simply combined the
> latency measurement with the current receive window size to calculate
> bandwidth, then chose a new receive window size based on that. It worked
> sufficiently well.
>
> The approach is a logical development of receive window sizing
> algorithms which simply measure how long and fat the network is, and
> size the window to encompass that statistic. In fact I implemented it by
> modifying the basic algorithm in Linux, rather than adding a new module.
>
> - Jonathan Morton
>
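Jonathan's patch itself isn't linked here, but the arithmetic he describes above (infer bandwidth from the current receive window and the measured latency, then resize the window) can be sketched roughly as follows. This is only an illustration of the idea; the function, the 50 ms queueing allowance, and the assumption that the sender keeps the window full are mine, not his.

    # Rough sketch of receive-side window sizing (illustrative only, not the
    # actual Linux modification discussed above).

    def next_receive_window(rwnd_bytes, rtt, base_rtt, queue_allowance=0.050):
        """All times in seconds, window sizes in bytes."""
        # Throughput implied by the current window, assuming the sender fills it.
        bandwidth = rwnd_bytes / rtt
        # Size the next window to cover the base RTT plus a bounded queueing delay.
        return int(bandwidth * (base_rtt + queue_allowance))

    # Example: a 256 KiB window seeing a 400 ms RTT on a ~100 ms base-RTT 3G link
    # implies ~640 KiB/s, so the window shrinks toward ~96 KiB and the queue drains.
    print(next_receive_window(256 * 1024, 0.400, 0.100))
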
> On Mar 21, 2013 1:16 AM, "Stephen Hemminger" <stephen@networkplumber.org> wrote:
>
>     On Thu, 21 Mar 2013 07:21:52 +1100
>     grenville armitage <garmitage@swin.edu.au> wrote:
>
>      >
>      >
>      > On 03/21/2013 02:36, Steffan Norberhuis wrote:
>      > > Hello Everyone,
>      > >
>      > > For a project at Delft University of Technology, three fellow
>      > > students and I are writing a review paper on the bufferbloat
>      > > problem and its possible solutions.
>      >
>      > My colleagues have been dabbling with delay-based CC algorithms,
>      > with FreeBSD implementations (http://caia.swin.edu.au/urp/newtcp/)
>      > if that's of any interest.
>      >
>      > Some thoughts:
>      >
>      >   - When delay-based TCPs share bottlenecks with loss-based TCPs,
>      >       the delay-based TCPs are punished. Hard. They back off,
>      >       as queuing delay builds, while the loss-based flow(s)
>      >       blissfully continue to push the queue to full (drop).
>      >       Everyone sharing the bottleneck sees latency fluctuations,
>      >       bounded by the bottleneck queue's effective 'length' (set
>      >       by physical RAM limits or operator-configurable threshold).
>      >
>      >   - The previous point suggests perhaps a hybrid TCP which uses
>      >       delay-based control, but switches (briefly) to loss-based
>      >       control if it detects the bottleneck queue is being
>      >       hammered by other, loss-based TCP flows. Challenging
>      >       questions arise as to what triggers switching between
>      >       delay-based and loss-based modes (see the first sketch
>      >       after this list).
>      >
>      >   - Reducing a buffer's length requires meddling with the
>      >       bottleneck(s) (new firmware or new devices). Deploying
>      >       delay-based TCPs requires meddling with endpoints (OS
>      >       upgrade/patch). Many more of the latter than the former.
>      >
>      >   - But yes, in self-contained networks where the end hosts can all
>      >       be told to run a delay-based CC algorithm, delay-based CC
>      >       can mitigate the impact of bloated buffers in your bottleneck
>      >       network devices. Such homogeneous environments do exist, but
>      >       the Internet is quite different.
>      >
>      >   - Alternatively, if one could classify delay-based CC flows into
>      >       one queue and loss-based CC flows into another queue at each
>      >       bottleneck, the first point above might not be such a problem
>      >       (see the second sketch after this list). I also want a pink
>      >       pony ;)  (Of course, once we're considering tweaking the
>      >       bottlenecks with classifiers and multiple queues, we might as
>      >       well continue the surgery and reduce the bloated buffers too.)
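On the hybrid idea a couple of bullets up, the open question is the trigger. Purely to make the shape of such a trigger concrete, here is a toy heuristic (thresholds, names, and structure are invented, not taken from any published algorithm): stay delay-based while backing off actually drains the queue, and fall back to loss-based behaviour when queueing delay stays high regardless, which hints that loss-based competitors are filling the buffer.

    # Toy mode-switch trigger for a hypothetical hybrid delay/loss controller.

    class HybridMode:
        def __init__(self, delay_target=0.030, patience=3):
            self.delay_target = delay_target  # queueing delay we try to hold (seconds)
            self.patience = patience          # consecutive bad samples tolerated
            self.loss_mode = False
            self.bad_samples = 0

        def update(self, rtt, base_rtt, we_backed_off):
            queueing_delay = rtt - base_rtt
            if queueing_delay > self.delay_target and we_backed_off:
                # We reduced our rate, yet the queue stays above target:
                # competing loss-based flows are probably filling it.
                self.bad_samples += 1
            else:
                self.bad_samples = 0
            self.loss_mode = self.bad_samples >= self.patience
            return self.loss_mode
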
>
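And for the two-queue thought experiment in the last bullet (the second sketch promised above): assuming packets somehow arrive already labelled with their congestion-control family, which is exactly the part nobody knows how to do on the open Internet, the bottleneck side is simple. The labels and names below are hypothetical.

    from collections import deque
    from itertools import cycle

    # Two isolated queues, so loss-based queue growth no longer inflates the
    # RTT seen by delay-based flows. Assumes an honest per-packet label.
    queues = {"delay_based": deque(), "loss_based": deque()}
    _service_order = cycle(queues)

    def enqueue(packet, cc_family):
        queues[cc_family].append(packet)

    def dequeue():
        # Alternate between the two classes, skipping whichever is empty.
        for _ in range(len(queues)):
            name = next(_service_order)
            if queues[name]:
                return queues[name].popleft()
        return None
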
>     Everyone has to go through the phase of thinking "it can't be that
>     hard, I can invent a better TCP congestion algorithm." But it is
>     hard, and the delay-based algorithms are fundamentally flawed
>     because they see reverse-path delay and cross traffic as false
>     positives. The hybrid ones all fall back to loss under "interesting
>     times", so they really don't buy much.
>
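Stephen's false-positive point is easy to see with numbers: an RTT sample lumps forward and reverse queueing together, so a sender measuring only RTT cannot tell which direction is congested. The values below are invented purely for illustration.

    # Why RTT-only delay signals misfire: one number mixes both directions.
    base_rtt = 0.040       # propagation delay, both directions (seconds)
    forward_queue = 0.000  # our own bottleneck queue is actually empty
    reverse_queue = 0.080  # an unrelated upload is queueing the ACK path

    rtt = base_rtt + forward_queue + reverse_queue
    apparent_queueing = rtt - base_rtt  # 0.080 s: looks like congestion,
                                        # but backing off here helps nobody
    print(apparent_queueing)
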
>     Really not convinced that bufferbloat will be solved by TCP. You can
>     very easily make a TCP algorithm that causes worse latency than Cubic
>     or Reno. But doing better is hard, especially since TCP really can't
>     assume much about its underlying network. There may be random delays
>     and packet loss (wireless), there may be spikes in RTT, and sessions
>     may be long- or short-lived. And you can't assume the whole world is
>     running your algorithm.
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>


Thread overview: 15+ messages
2013-03-20 15:36 Steffan Norberhuis
2013-03-20 15:55 ` Dave Taht
2013-03-20 16:12 ` Oliver Hohlfeld
2013-03-20 16:35 ` Michael Richardson
2013-03-20 20:21 ` grenville armitage
2013-03-20 23:16   ` Stephen Hemminger
2013-03-21  1:01     ` Jonathan Morton
2013-03-26 13:10       ` Maarten de Vries
2013-03-26 13:24         ` Jonathan Morton
2013-04-04  0:10       ` Simon Barber [this message]
2013-03-21  8:26     ` Hagen Paul Pfeifer
2013-04-03 18:16     ` Juliusz Chroboczek
2013-04-03 18:23       ` Hagen Paul Pfeifer
2013-04-03 19:35         ` Juliusz Chroboczek
2013-04-03 18:14 ` Juliusz Chroboczek
