General list for discussing Bufferbloat
From: Jim Gettys <jg@freedesktop.org>
To: "Bill Ver Steeg (versteb)" <versteb@cisco.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in
Date: Wed, 22 Apr 2015 10:53:37 -0700	[thread overview]
Message-ID: <CAGhGL2A1se847yZ6sdevLPTDAfsWNJjD-CT40vRZYpcBWfxVGA@mail.gmail.com> (raw)
In-Reply-To: <AE7F97DB5FEE054088D82E836BD15BE9319C1CD3@xmb-aln-x05.cisco.com>


Actually, fq_codel's sparse flow optimization provides a pretty strong
incentive for pacing traffic.

If your TCP traffic is well paced, and is running at a rate below that of
the bottleneck, then it will not build a queue.

It will then be recognized as a "good guy" flow, and scheduled
preferentially against other TCP flows that do build a queue (which is what
happens today, with TCPs that do not pace).
                                                  - Jim
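To make the sparse-flow preference concrete, here is a toy model of the two-tier scheduling Jim describes. This is not the actual fq_codel code; the flow names, tick-based service model, and rates are invented for illustration, and CoDel's drop logic is omitted entirely:

```python
from collections import deque

class FqSched:
    """Toy two-tier scheduler in the spirit of fq_codel: a flow whose
    queue was empty re-enters via the 'new' list, which is served first."""
    def __init__(self):
        self.queues = {}              # flow name -> deque of enqueue times
        self.new_flows = deque()
        self.old_flows = deque()

    def enqueue(self, flow, now):
        q = self.queues.setdefault(flow, deque())
        if not q:                     # flow was idle: treat it as sparse
            self.new_flows.append(flow)
        q.append(now)

    def dequeue(self):
        for lst in (self.new_flows, self.old_flows):
            if lst:
                flow = lst.popleft()
                sent_at = self.queues[flow].popleft()
                if self.queues[flow]: # still backlogged: demote to 'old'
                    self.old_flows.append(flow)
                return flow, sent_at
        return None

def simulate(ticks=15, burst=10, link_rate=2):
    """Bursty flow dumps `burst` packets at t=0; paced flow sends one
    packet per tick, below the link rate, so it never builds a queue."""
    sched = FqSched()
    latency = {"paced": [], "bursty": []}
    for _ in range(burst):
        sched.enqueue("bursty", 0)
    for t in range(ticks):
        sched.enqueue("paced", t)
        for _ in range(link_rate):    # bottleneck serves 2 packets/tick
            item = sched.dequeue()
            if item is not None:
                flow, sent_at = item
                latency[flow].append(t - sent_at)
    return latency

lat = simulate()
# The paced flow's queue empties every tick, so it is re-classified as
# sparse on each packet and sees zero queueing delay, while the bursty
# flow's backlog drains one packet per tick behind it.
```

Under this model the paced flow's per-packet delay stays at zero while the bursty flow's grows with its backlog, which is exactly the scheduling incentive described above.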


On Wed, Apr 22, 2015 at 10:05 AM, Bill Ver Steeg (versteb) <
versteb@cisco.com> wrote:

> Actually, pacing does provide SOME benefit even when there are AQM schemes
> in place. Sure, the primary benefit of pacing is in the presence of legacy
> buffer management algorithms, but there are some additional benefits when
> used in conjunction with a modern queue management scheme.
>
> Let's conduct a thought experiment with two rate-adaptive flows competing
> at a bottleneck on a very simple one-hop network with a very short RTT.
> The results map onto your topology of choice. As in most rate adaptive
> protocols, there are N rates available to the receiver. The receiver
> fetches a chunk of data, observes the receive rate and decides the
> resolution of the next chunk of data that it fetches. It upshifts and
> downshifts based on perceived network conditions.
>
>
> First, the unpaced version
> On the idle link, let's start both flows, with a small offset in time
> between them. The first flow will HTTP GET a small chunk of data at a very
> low resolution, and will receive the data at the line rate. The second one
> will then GET a small chunk of data at a very low resolution, and will
> also receive the data at the line rate. What will each adaptive bitrate
> algorithm think the available bandwidth is? They will both think they can
> run at the line rate, so they both decide to upshift. Maybe they go to the
> next resolution up, maybe they go to just below the line rate. Rinse and
> repeat at various rates, with the square-wave receive patterns overlapping
> anywhere from 0% to 100%. Whenever the flows do not overlap 100% in time,
> each client overestimates the link capacity. So the apparent rate (as seen
> by the adaptation logic on the client) is misleading.
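A back-of-the-envelope sketch of that overestimation. The link capacity and rate ladder here are made-up values, not numbers from the thread:

```python
LINK_MBPS = 10.0                       # assumed bottleneck capacity
LADDER = [1.0, 2.5, 5.0, 8.0]          # hypothetical ABR rate ladder, Mbit/s

def pick_rate(observed_mbps):
    """Upshift to the highest ladder rung below the observed throughput."""
    return max(r for r in LADDER if r < observed_mbps)

# Chunks that do not overlap in time are each delivered at the full line
# rate, so each client independently measures ~10 Mbit/s available.
client_a = pick_rate(LINK_MBPS)
client_b = pick_rate(LINK_MBPS)
# Both pick the 8.0 Mbit/s rung; their combined demand (16 Mbit/s) exceeds
# the link, so subsequent chunks collide and both clients must downshift.
```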
>
> Now the paced version
> On the idle link, let's start both flows, with a small offset in time
> between them. The first flow GETs a small chunk of data at a very low
> resolution, but the data is paced at a certain rate (it could be the next
> higher rate in the manifest file for this asset, could be the highest rate
> in the manifest file, could be 1.x times one of these rates, could be some
> other rate). The second one will then GET a small chunk of data at a very
> low resolution, also at a deterministic paced rate. The data will arrive
> at the client at either the rate it requested or slower. The flows tend to
> be dispersed in time, and "defend their turf" a bit. The likelihood of a
> gross misestimation of spare link capacity is decreased.
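For concreteness, pacing a chunk at a deterministic rate just means spacing out packet transmissions. A minimal sketch, where the MTU and the rates are assumed values (the 1.25x policy is one of the options mentioned above):

```python
MTU_BITS = 1500 * 8                    # assumed on-the-wire packet size

def pacing_gap_us(rate_mbps):
    """Inter-packet gap, in microseconds, needed to send MTU-sized
    packets at the target rate (1 Mbit/s == 1 bit/us)."""
    return MTU_BITS / rate_mbps

# Pace the chunk at 1.25x the next-higher manifest rate.
next_rung_mbps = 5.0
pace_mbps = 1.25 * next_rung_mbps      # 6.25 Mbit/s
gap = pacing_gap_us(pace_mbps)         # 1920 us between packet sends
```

Because the sender never transmits faster than `pace_mbps`, the receiver's throughput sample is capped at the paced rate, which is what keeps the capacity estimate conservative.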
>
> Note that this is definitely a second-order effect, and one has to think
> about how such flows compete against the more aggressive "send as fast as
> you can" algorithms. It seems that there would be a control theory approach
> to solving this problem. Hopefully somebody with better math skills than
> mine could quantify the effects.
>
> There is a bit of game theory mixed into the technical analysis. I have
> been playing around with this a bit, and it is (at the very least) quite
> interesting....
>
>
> Bill Ver Steeg
> DISTINGUISHED ENGINEER
> versteb@cisco.com
>
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net [mailto:
> bloat-bounces@lists.bufferbloat.net] On Behalf Of Steinar H. Gunderson
> Sent: Wednesday, April 22, 2015 12:00 PM
> To: luca.muscariello@orange.com
> Cc: bloat
> Subject: Re: [Bloat] RE : DSLReports Speed Test has latency measurement
> built-in
>
> On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com
> wrote:
> > BTW if a paced flow from Google shares a bloated buffer with a non-paced
> > flow from a non-Google server, doesn't this turn out to be a performance
> > penalty for the paced flow?
>
> Nope. The paced flow puts less strain on the buffer (and hooray for that),
> which is a win no matter if the buffer is contended or not.
>
> > fq_codel gives incentives to do pacing but if it's not deployed what's
> > the performance gain of using pacing?
>
> fq_codel doesn't give any specific incentive to do pacing. In fact, if
> absolutely all devices on your path used fq_codel and had adequate
> buffers, I believe pacing would be largely a no-op.
>
> /* Steinar */
> --
> Homepage: http://www.sesse.net/
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>



Thread overview: 4+ messages
2015-04-22 17:05 Bill Ver Steeg (versteb)
2015-04-22 17:53 ` Jim Gettys [this message]
2015-04-22 17:56   ` Bill Ver Steeg (versteb)
2015-04-22 18:34   ` Eric Dumazet
