From: Jim Gettys
Date: Wed, 22 Apr 2015 10:53:37 -0700
To: "Bill Ver Steeg (versteb)"
Cc: bloat
Subject: Re: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in

Actually, fq_codel's sparse flow optimization provides a pretty strong
incentive for pacing traffic.

If your TCP traffic is well paced, and is running at a rate below that
of the bottleneck, then it will not build a queue. It will then be
recognized as a "good guy" flow, and scheduled preferentially against
other TCP flows that do build a queue (which is what happens today with
TCPs that lack pacing).

                         - Jim

On Wed, Apr 22, 2015 at 10:05 AM, Bill Ver Steeg (versteb)
<versteb@cisco.com> wrote:

> Actually, pacing does provide SOME benefit even when there are AQM
> schemes in place. Sure, the primary benefit of pacing is in the
> presence of legacy buffer management algorithms, but there are some
> additional benefits when used in conjunction with a modern queue
> management scheme.
>
> Let's conduct a thought experiment with two rate-adaptive flows
> competing at a bottleneck on a very simple one-hop network with a very
> short RTT. The results map onto your topology of choice. As in most
> rate-adaptive protocols, there are N rates available to the receiver.
> The receiver fetches a chunk of data, observes the receive rate, and
> decides the resolution of the next chunk it fetches. It upshifts and
> downshifts based on perceived network conditions.
>
> First, the unpaced version.
> On the idle link, let's start both flows, with a small offset in time
> between them. The first flow will HTTP GET a small chunk of data at a
> very low resolution, and will get the data at the line rate. The
> second one will then GET a small chunk of data at a very low
> resolution, and will also get the data at the line rate. What will
> each adaptive bitrate algorithm think the available bandwidth is? They
> will both think they can run at the line rate... They both decide to
> upshift to a higher rate. Maybe they go to the next resolution up,
> maybe they go to just below the line rate. Rinse and repeat at various
> rates, with the square-wave receive patterns overlapping anywhere from
> 0% to 100%. Whenever the flows do not overlap 100% in time, each
> client overestimates the link capacity. So the apparent rate (as seen
> by the adaptation logic on the client) is misleading.
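>
> A back-of-the-envelope sketch of that overestimation (Python; the line
> rate, chunk size, and rate ladder are made-up numbers, purely for
> illustration). If a fraction p of a chunk's bits arrive while the
> other flow is also transferring (so the link is split two ways), the
> client's measured rate works out to line_rate / (1 + p):
>
>     LINE_RATE = 10e6                 # bit/s, hypothetical bottleneck
>     LADDER = [1e6, 2.5e6, 5e6, 8e6]  # hypothetical encoding rates
>
>     def measured_rate(p, line_rate=LINE_RATE):
>         """Average rate one client observes for a chunk when a
>         fraction p of its bits arrive while the competing flow is
>         active (fair share, line_rate / 2) and the rest arrive at
>         the full line rate."""
>         return line_rate / (1.0 + p)
>
>     for p in (0.0, 0.5, 1.0):
>         est = measured_rate(p)
>         pick = max(r for r in LADDER if r <= est)
>         print(f"overlap {p:.0%}: measures {est/1e6:.1f} Mbit/s, "
>               f"picks the {pick/1e6} Mbit/s rung")
>
>     # overlap 0%:   measures 10.0 Mbit/s, picks 8 Mbit/s  <- both do!
>     # overlap 50%:  measures  6.7 Mbit/s, picks 5 Mbit/s
>     # overlap 100%: measures  5.0 Mbit/s, picks 5 Mbit/s  (fair share)
>
> Two clients whose transfers rarely overlap both pick the 8 Mbit/s rung
> on a link that can only sustain 5 Mbit/s each.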
>
> Now the paced version.
> On the idle link, let's start both flows, with a small offset in time
> between them. The first flow gets a small chunk of data at a very low
> resolution, but the data is paced at a certain rate (could be the next
> higher rate in the manifest file for this asset, could be the highest
> rate in the manifest file, could be 1.x times one of these rates,
> could be some other rate...). The second one will then get a small
> chunk of data at a very low resolution, also at a deterministic paced
> rate. The data will arrive at the client at either the rate it
> requested or slower. The flows tend to be dispersed in time, and
> "defend their turf" a bit. The likelihood of a gross misestimation of
> spare link capacity is decreased.
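>
> A sketch of one such sender-side pacing policy (hypothetical code; the
> ladder is the same invented one as above, and the 1.2 headroom factor
> is just one example of the "1.x" option):
>
>     def pacing_rate(current_rate, ladder, headroom=1.2):
>         """Pace the next chunk a bit above the next rung up the
>         ladder, instead of blasting it out at line rate."""
>         higher = [r for r in ladder if r > current_rate]
>         target = higher[0] if higher else ladder[-1]
>         return headroom * target
>
>     # The client's estimate is now capped near a rung rather than at
>     # the line rate, whether or not the transfers happen to overlap:
>     #   estimate <= min(fair_share, pacing_rate(current_rate, LADDER))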
>
> Note that this is definitely a second-order effect, and one has to
> think about how such flows compete against the more aggressive "send
> as fast as you can" algorithms. It seems that there would be a
> control-theory approach to solving this problem. Hopefully somebody
> with better math skills than mine can quantify the effects.
>
> There is a bit of game theory mixed into the technical analysis. I
> have been playing around with this a bit, and it is (at the very
> least) quite interesting...
>
> Bill Ver Steeg
> DISTINGUISHED ENGINEER
> versteb@cisco.com
>
> -----Original Message-----
> From: bloat-bounces@lists.bufferbloat.net
> [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Steinar H.
> Gunderson
> Sent: Wednesday, April 22, 2015 12:00 PM
> To: luca.muscariello@orange.com
> Cc: bloat
> Subject: Re: [Bloat] RE : DSLReports Speed Test has latency
> measurement built-in
>
> On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com
> wrote:
> > BTW if a paced flow from Google shares a bloated buffer with a non
> > paced flow from a non Google server, doesn't this turn out to be a
> > performance penalty for the paced flow?
>
> Nope. The paced flow puts less strain on the buffer (and hooray for
> that), which is a win whether or not the buffer is contended.
>
> > fq_codel gives incentives to do pacing, but if it's not deployed,
> > what's the performance gain of using pacing?
>
> fq_codel doesn't give any specific incentive to do pacing. In fact, if
> absolutely all devices on your path used fq_codel and had adequate
> buffers, I believe pacing would be largely a no-op.
>
> /* Steinar */
> --
> Homepage: http://www.sesse.net/

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat