[Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in

Bill Ver Steeg (versteb) versteb at cisco.com
Wed Apr 22 13:56:46 EDT 2015


Jim-

Agreed.

So, I amend my comments to say that there are SEVERAL benefits to pacing – even when using modern queue management algorithms.

Bvs



Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb at cisco.com

From: gettysjim at gmail.com [mailto:gettysjim at gmail.com] On Behalf Of Jim Gettys
Sent: Wednesday, April 22, 2015 1:54 PM
To: Bill Ver Steeg (versteb)
Cc: Steinar H. Gunderson; luca.muscariello at orange.com; bloat
Subject: Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in

Actually, fq_codel's sparse flow optimization provides a pretty strong incentive for pacing traffic.

If your TCP traffic is well paced, and is running at a rate below that of the bottleneck, then it will not build a queue.

It will then be recognized as a "good guy" flow, and scheduled preferentially against other TCP flows that do build a queue (which is what happens today with TCPs that do not pace).
                                                  - Jim
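
The sparse-flow preference described above can be sketched as a simplified DRR++-style scheduler in the spirit of fq_codel. This is illustrative Python, not the actual kernel implementation: real fq_codel hashes packets into queues and integrates CoDel dropping, both omitted here.

```python
# Toy sketch of fq_codel's "sparse flow" preference (DRR++ style).
# New flows are serviced before old flows; a paced flow that drains its
# queue and goes quiet re-enters the new list and jumps ahead of bulk flows.
from collections import deque

QUANTUM = 1514  # bytes of credit per round, roughly one MTU


class Flow:
    def __init__(self, name):
        self.name = name
        self.queue = deque()       # pending packet sizes, in bytes
        self.deficit = QUANTUM     # new flows start with a full quantum


def dequeue(new_flows, old_flows):
    """Return the next (flow_name, packet_size) to transmit, or None."""
    while True:
        if new_flows:
            flow, active = new_flows[0], new_flows
        elif old_flows:
            flow, active = old_flows[0], old_flows
        else:
            return None
        if not flow.queue:
            # Empty flow: drop it from scheduling (simplified; the real
            # code also handles moving empty new flows to the old list).
            active.popleft()
            continue
        if flow.deficit <= 0:
            # Out of credit: replenish and demote to the old-flow list.
            flow.deficit += QUANTUM
            active.popleft()
            old_flows.append(flow)
            continue
        pkt = flow.queue.popleft()
        flow.deficit -= pkt
        return flow.name, pkt
```

A well-paced flow that never queues more than a packet or two keeps landing in `new_flows` and is always served ahead of the queue-building bulk flow, which is the incentive Jim is pointing at.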


On Wed, Apr 22, 2015 at 10:05 AM, Bill Ver Steeg (versteb) <versteb at cisco.com> wrote:
Actually, pacing does provide SOME benefit even when there are AQM schemes in place. Sure, the primary benefit of pacing is in the presence of legacy buffer management algorithms, but there are some additional benefits when used in conjunction with a modern queue management scheme.

Let's conduct a thought experiment with two rate-adaptive flows competing at a bottleneck on a very simple one-hop network with a very short RTT. The results map onto your topology of choice. As in most rate-adaptive protocols, there are N rates available to the receiver. The receiver fetches a chunk of data, observes the receive rate, and decides the resolution of the next chunk of data that it fetches. It upshifts and downshifts based on perceived network conditions.


First, the unpaced version
On the idle link, let's start both flows, with a small offset in time between them. The first flow will HTTP GET a small chunk of data at a very low resolution, and will receive the data at the line rate. The second one will then GET a small chunk of data at a very low resolution, and will also receive the data at the line rate. What will each adaptive bitrate algorithm think the available bandwidth is? They will both think they can run at the line rate. They both decide to upshift: maybe to the next resolution up, maybe to just below the line rate. Rinse and repeat at various rates, with the square-wave receive patterns overlapping anywhere from 0% to 100%. Whenever the flows do not overlap 100% in time, each client overestimates the link capacity. So the apparent rate (as seen by the adaptation logic on the client) is misleading.
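
A toy calculation makes the overestimate concrete. The link rate, chunk size, and factor-of-two result below are illustrative numbers, not from the thread:

```python
# Unpaced case: two ABR clients share a 10 Mbit/s link, but their short
# chunk downloads happen not to overlap in time, so each burst runs at
# the full line rate. All numbers are hypothetical.

LINE_RATE = 10e6      # bottleneck capacity, bits/s
CHUNK_BITS = 2e6      # one low-resolution chunk, bits


def observed_rate(chunk_bits, transfer_seconds):
    """Rate the client's adaptation logic measures for one chunk."""
    return chunk_bits / transfer_seconds


# Each unpaced chunk bursts at line rate because the other flow is idle
# at that moment.
burst_time = CHUNK_BITS / LINE_RATE
est_a = observed_rate(CHUNK_BITS, burst_time)   # ~10 Mbit/s
est_b = observed_rate(CHUNK_BITS, burst_time)   # ~10 Mbit/s

# Yet the sustainable fair share with both flows active is only half that,
# so each client overestimates spare capacity by a factor of ~2 and upshifts.
fair_share = LINE_RATE / 2
overestimate = est_a / fair_share
```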

Now the paced version
On the idle link, let's start both flows, with a small offset in time between them. The first flow gets a small chunk of data at a very low resolution, but the data is paced at a certain rate (it could be the next-higher rate in the manifest file for this asset, the highest rate in the manifest file, 1.x times one of these rates, or some other rate). The second one will then get a small chunk of data at a very low resolution, also at a deterministic paced rate. The data will arrive at the client at either the rate it requested or slower. The flows tend to be dispersed in time and "defend their turf" a bit. The likelihood of a gross misestimation of spare link capacity is decreased.
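
Continuing the toy numbers from the unpaced case (again illustrative, with a hypothetical 6 Mbit/s pacing rate standing in for "the next rung of the manifest"):

```python
# Paced case: the sender paces each chunk at a chosen rate instead of
# bursting at line rate. Numbers are hypothetical.

LINE_RATE = 10e6      # bottleneck capacity, bits/s
CHUNK_BITS = 2e6      # one low-resolution chunk, bits
PACE_RATE = 6e6       # e.g. the next-higher rate in the manifest file

# Pacing stretches the transfer, so the client measures at most PACE_RATE
# (or less, if the flows overlap and share the bottleneck).
transfer_time = CHUNK_BITS / PACE_RATE
est = CHUNK_BITS / transfer_time   # ~6 Mbit/s

# The estimate is now bounded by the pacing rate rather than the line
# rate, so the worst-case overestimate of the 5 Mbit/s fair share shrinks.
```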

Note that this is definitely a second-order effect, and one has to think about how such flows compete against more aggressive "send as fast as you can" algorithms. It seems that there would be a control-theory approach to solving this problem. Hopefully somebody with better math skills than me can quantify the effects.

There is a bit of game theory mixed into the technical analysis. I have been playing around with this a bit, and it is (at the very least) quite interesting....


Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb at cisco.com

-----Original Message-----
From: bloat-bounces at lists.bufferbloat.net [mailto:bloat-bounces at lists.bufferbloat.net] On Behalf Of Steinar H. Gunderson
Sent: Wednesday, April 22, 2015 12:00 PM
To: luca.muscariello at orange.com
Cc: bloat
Subject: Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in

On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello at orange.com wrote:
> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google server,  doesn't this turn out to be a
> performance penalty for the paced flow?

Nope. The paced flow puts less strain on the buffer (and hooray for that), which is a win no matter if the buffer is contended or not.

> fq_codel gives incentives to do pacing but if it's not deployed what's
> the performance gain of using pacing?

fq_codel doesn't give any specific incentive to do pacing. In fact, if absolutely all devices on your path would use fq_codel and have adequate buffers, I believe pacing would be largely a no-op.

/* Steinar */
--
Homepage: http://www.sesse.net/
_______________________________________________
Bloat mailing list
Bloat at lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
