* [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in
@ 2015-04-22 17:05 Bill Ver Steeg (versteb)
2015-04-22 17:53 ` Jim Gettys
0 siblings, 1 reply; 4+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-04-22 17:05 UTC (permalink / raw)
To: Steinar H. Gunderson, luca.muscariello; +Cc: bloat
Actually, pacing does provide SOME benefit even when there are AQM schemes in place. Sure, the primary benefit of pacing is in the presence of legacy buffer management algorithms, but there are some additional benefits when used in conjunction with a modern queue management scheme.
Let's conduct a thought experiment with two rate-adaptive flows competing at a bottleneck on a very simple one-hop network with a very short RTT. The results map onto your topology of choice. As in most rate-adaptive protocols, there are N rates available to the receiver. The receiver fetches a chunk of data, observes the receive rate, and decides the resolution of the next chunk that it fetches. It upshifts and downshifts based on perceived network conditions.
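As a minimal sketch of that adaptation loop (hypothetical names and thresholds, not any particular player's logic):

    import time

    RATES_BPS = [1_000_000, 2_500_000, 5_000_000, 8_000_000]  # bitrate ladder from the manifest

    def fetch_chunk(get_chunk, index):
        # Fetch one chunk at the chosen rung and return the apparent
        # receive rate in bits/s, as the client's adaptation logic sees it.
        start = time.monotonic()
        data = get_chunk(RATES_BPS[index])        # e.g. an HTTP GET of one segment
        elapsed = time.monotonic() - start
        return len(data) * 8 / elapsed

    def next_rate(index, measured_bps):
        # Upshift if the last chunk arrived comfortably faster than the
        # next rung up; downshift if it arrived slower than the current one.
        if index + 1 < len(RATES_BPS) and measured_bps > 1.2 * RATES_BPS[index + 1]:
            return index + 1
        if measured_bps < RATES_BPS[index]:
            return max(0, index - 1)
        return index

The point is that measured_bps is whatever the network happens to deliver for that one burst, which is exactly where the trouble below comes from.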
First, the unpaced version
On the idle link, let's start both flows - with a small offset in time between them. The first flow will HTTP GET a small chunk of data at a very low resolution, and will get the data at the line rate. The second one will then GET a small chunk of data at a very low resolution, and will also get the data at the line rate. What will each adaptive bitrate algorithm think the available bandwidth is? They will both think they can run at the line rate. They both decide to upshift to a higher rate. Maybe they go to the next resolution up, maybe they go to just below the line rate. Rinse and repeat at various rates, with the square-wave receive patterns overlapping anywhere from 0% to 100%. When the flows do not overlap 100% in time, each client overestimates the link capacity. So the apparent rate (as seen by the adaptation logic on the client) is misleading.
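To put illustrative numbers on it (mine, not from the original post): on a 10 Mb/s link, a 2-second chunk encoded at 2.5 Mb/s is 5 Mb of data and arrives in 0.5 s at line rate. If the two clients' 0.5-second bursts happen not to overlap, each measures an apparent rate of 10 Mb/s and upshifts, even though the fair share once both stream continuously is only 5 Mb/s.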
Now the paced version
On the idle link, let's start both flows - with a small offset in time between them. The first flow gets a small chunk of data at a very low resolution, but the data is paced at a certain rate (could be the next higher rate in the manifest file for this asset, could be the highest rate in the manifest file, could be 1.x times one of these rates, could be some other rate...). The second one will then get a small chunk of data at a very low resolution, also at a deterministic paced rate. The data will arrive at the client at either the requested rate or slower. The flows tend to be dispersed in time, and "defend their turf" a bit. The likelihood of a gross misestimation of spare link capacity is decreased.
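A minimal sketch of server-side, application-level chunk pacing (hypothetical names; a real deployment might instead lean on the kernel's fq qdisc and socket pacing rather than sleeping in userspace):

    import socket
    import time

    def send_paced(sock: socket.socket, chunk: bytes, rate_bps: float,
                   burst: int = 16 * 1024) -> None:
        # Write `chunk` to `sock` at roughly `rate_bps`, in `burst`-byte
        # slices. Sleeping between slices caps the send rate, so the
        # receiver's apparent rate reflects the chosen pace, not the
        # line rate of the bottleneck link.
        interval = burst * 8 / rate_bps           # seconds per slice at the target rate
        next_send = time.monotonic()
        for offset in range(0, len(chunk), burst):
            now = time.monotonic()
            if now < next_send:
                time.sleep(next_send - now)       # wait for the next pacing slot
            sock.sendall(chunk[offset:offset + burst])
            next_send += interval

Pacing at, say, 1.2x the chunk's encoded rate keeps delivery a little ahead of playback while avoiding the misleading line-rate bursts described above.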
Note that this is definitely a second-order effect, and one has to think about how such flows compete against the more aggressive "send as fast as you can" algorithms. It seems there should be a control-theory approach to solving this problem. Hopefully somebody with better math skills than I have could quantify the effects.
There is a bit of game theory mixed into the technical analysis. I have been playing around with this a bit, and it is (at the very least) quite interesting....
Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb@cisco.com
-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Steinar H. Gunderson
Sent: Wednesday, April 22, 2015 12:00 PM
To: luca.muscariello@orange.com
Cc: bloat
Subject: Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in
On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google server, doesn't this turn out to be a
> performance penalty for the paced flow?
Nope. The paced flow puts less strain on the buffer (and hooray for that), which is a win whether or not the buffer is contended.
> fq_codel gives incentives to do pacing but if it's not deployed what's
> the performance gain of using pacing?
fq_codel doesn't give any specific incentive to do pacing. In fact, if absolutely all devices on your path used fq_codel and had adequate buffers, I believe pacing would be largely a no-op.
/* Steinar */
--
Homepage: http://www.sesse.net/
* Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 17:05 [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in Bill Ver Steeg (versteb)
@ 2015-04-22 17:53 ` Jim Gettys
2015-04-22 17:56 ` Bill Ver Steeg (versteb)
2015-04-22 18:34 ` Eric Dumazet
0 siblings, 2 replies; 4+ messages in thread
From: Jim Gettys @ 2015-04-22 17:53 UTC (permalink / raw)
To: Bill Ver Steeg (versteb); +Cc: bloat
Actually, fq_codel's sparse flow optimization provides a pretty strong incentive for pacing traffic.
If your TCP traffic is well paced, and is running at a rate below that of the bottleneck, then it will not build a queue.
It will then be recognized as a "good guy" flow, and scheduled preferentially against other TCP flows that do build a queue (which is what happens today, with TCPs that do not pace).
- Jim
> [Bill Ver Steeg's message of 17:05, quoted in full, trimmed]
* Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 17:53 ` Jim Gettys
@ 2015-04-22 17:56 ` Bill Ver Steeg (versteb)
2015-04-22 18:34 ` Eric Dumazet
1 sibling, 0 replies; 4+ messages in thread
From: Bill Ver Steeg (versteb) @ 2015-04-22 17:56 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
Jim-
Agreed.
So, I amend my comments to say that there are SEVERAL benefits to pacing – even when using modern queue management algorithms.
Bvs
Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb@cisco.com
From: gettysjim@gmail.com [mailto:gettysjim@gmail.com] On Behalf Of Jim Gettys
Sent: Wednesday, April 22, 2015 1:54 PM
To: Bill Ver Steeg (versteb)
Cc: Steinar H. Gunderson; luca.muscariello@orange.com; bloat
Subject: Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in
[Jim Gettys's message of 17:53, and the earlier thread below it, quoted in full; trimmed]
* Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in
2015-04-22 17:53 ` Jim Gettys
2015-04-22 17:56 ` Bill Ver Steeg (versteb)
@ 2015-04-22 18:34 ` Eric Dumazet
1 sibling, 0 replies; 4+ messages in thread
From: Eric Dumazet @ 2015-04-22 18:34 UTC (permalink / raw)
To: Jim Gettys; +Cc: bloat
On Wed, 2015-04-22 at 10:53 -0700, Jim Gettys wrote:
> Actually, fq_codel's sparse flow optimization provides a pretty strong
> incentive for pacing traffic.
>
> If your TCP traffic is well paced, and is running at a rate below that
> of the bottleneck, then it will not build a queue.
>
> It will then be recognized as a "good guy" flow, and scheduled
> preferentially against other TCP flows that do build a queue (which is
> what happens today, with TCPs that do not pace).
> - Jim
Well, fq_codel's 'sparse flow' logic is not going to decide that a flow sending one packet every 100 ms is a 'good guy' that needs a boost.
This kind of flow (paced, but sending 10 packets per second) will use the normal round-robin mechanism.
However, it will still get its normal share compared to the elephant flows.
fq_codel's 'boost' applies only to the first packets of a new flow.
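A rough sketch of the dequeue logic in question (simplified from the behavior Eric describes, not the kernel source; enqueue, flow hashing, and the CoDel drop logic are omitted): fq_codel keeps two lists. A flow that was idle and becomes active joins the new-flow list and is served first, but only until it spends one quantum of credit; after that it drops to the old-flow list and shares bandwidth by ordinary deficit round-robin.

    from collections import deque

    QUANTUM = 1514  # roughly one MTU-sized packet of credit per turn

    class Flow:
        def __init__(self):
            self.queue = deque()      # this flow's backlogged packets
            self.deficit = QUANTUM    # byte credit for the current turn

    def dequeue(new_flows, old_flows):
        # Serve the new-flow list first; demote any flow that overspends
        # its quantum, and forget old flows that have gone idle.
        while True:
            lst = new_flows if new_flows else old_flows
            if not lst:
                return None                          # nothing to send
            flow = lst[0]
            if flow.deficit <= 0:                    # turn is used up:
                flow.deficit += QUANTUM              # refill its credit and
                old_flows.append(lst.popleft())      # demote it to the old list
                continue
            if not flow.queue:                       # flow has gone idle
                if lst is new_flows:
                    old_flows.append(lst.popleft())  # one more round, then out
                else:
                    lst.popleft()                    # drop the idle old flow
                continue
            pkt = flow.queue.popleft()
            flow.deficit -= len(pkt)
            return pkt

A flow that sends so gently that its queue is usually empty keeps re-entering the new-flow list and gets the preferential treatment Jim describes; a steadily backlogged flow, paced or not, lives on the old-flow list and simply gets its fair round-robin share, which is Eric's point.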