From: "Bill Ver Steeg (versteb)" <versteb@cisco.com>
To: "Steinar H. Gunderson", "luca.muscariello@orange.com"
Cc: bloat <bloat@lists.bufferbloat.net>
Date: Wed, 22 Apr 2015 17:05:43 +0000
Subject: [Bloat] Pacing ---- was RE: RE : DSLReports Speed Test has latency measurement built-in

Actually, pacing does provide SOME benefit even when there are AQM schemes in place. Sure, the primary benefit of pacing is in the presence of legacy buffer management algorithms, but there are some additional benefits when used in conjunction with a modern queue management scheme.

Let's conduct a thought experiment with two rate-adaptive flows competing at a bottleneck on a very simple one-hop network with a very short RTT. The results map onto your topology of choice. As in most rate-adaptive protocols, there are N rates available to the receiver. The receiver fetches a chunk of data, observes the receive rate, and decides the resolution of the next chunk of data that it fetches. It upshifts and downshifts based on perceived network conditions.

First, the unpaced version.

On the idle link, let's start both flows with a small offset in time between them. The first flow will HTTP GET a small chunk of data at a very low resolution, and will receive the data at the line rate. The second one will then get a small chunk of data at a very low resolution, and will also receive the data at the line rate.
What will each adaptive bitrate algorithm think the available bandwidth is? They will both think they can run at the line rate, so they both decide to upshift to a higher rate. Maybe they go to the next resolution up, maybe they go to just below the line rate. Rinse and repeat at various rates, with the square-wave receive patterns overlapping anywhere from 0% to 100%. Whenever the flows do not overlap 100% in time, each client overestimates the link capacity. So the apparent rate (as seen by the adaptation logic on the client) is misleading.

Now the paced version.

On the idle link, let's start both flows with a small offset in time between them. The first flow gets a small chunk of data at a very low resolution, but the data is paced at a certain rate (it could be the next higher rate in the manifest file for this asset, the highest rate in the manifest file, 1.x times one of these rates, or some other rate). The second one will then get a small chunk of data at a very low resolution, also at a deterministic paced rate. The data will arrive at each client at either the rate it requested or slower. The flows tend to be dispersed in time, and "defend their turf" a bit. The likelihood of a gross misestimation of spare link capacity is decreased.

Note that this is definitely a second-order effect, and one has to think about how such flows compete against more aggressive "send as fast as you can" algorithms. It seems that there would be a control-theory approach to solving this problem. Hopefully somebody with better math skills than I have could quantify the effects.

There is a bit of game theory mixed into the technical analysis.
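The mind experiment above can be sketched as a toy back-of-the-envelope model. All numbers here (the 10 Mbit/s link, the 2 Mbit chunk, the 4 Mbit/s rung, the 1.25x pacing multiplier) are illustrative assumptions, not from the discussion; the point is only that burst overlap drives the client's estimate, while pacing caps it:

```python
# Toy model of two ABR flows sharing one bottleneck (assumed numbers).
LINE_RATE = 10e6   # bottleneck capacity, bits/s (assumption)
CHUNK = 2e6        # bits per fetched chunk (assumption)

def estimate_unpaced(overlap_fraction):
    """Client-side rate estimate when chunks arrive as line-rate bursts.

    If the two flows' bursts overlap for a fraction f of the transfer,
    the client receives (1 - f) of the chunk at the full line rate and
    the rest at half the line rate (the fair share during overlap).
    """
    t = (CHUNK * (1 - overlap_fraction)) / LINE_RATE \
        + (CHUNK * overlap_fraction) / (LINE_RATE / 2)
    return CHUNK / t

def estimate_paced(pace_rate):
    """Client-side estimate when the sender paces: the arrival rate is
    capped at the paced rate or the fair share, whichever is lower."""
    return min(pace_rate, LINE_RATE / 2)

# Bursts that never overlap: each client thinks it owns the whole link.
print(estimate_unpaced(0.0) / 1e6)       # 10.0 Mbit/s: gross overestimate
# Fully overlapping bursts: an honest fair-share estimate.
print(estimate_unpaced(1.0) / 1e6)       # 5.0 Mbit/s
# Paced at 1.25x a 4 Mbit/s rung: the estimate can never exceed the pace.
print(estimate_paced(1.25 * 4e6) / 1e6)  # 5.0 Mbit/s
```

Anywhere between 0% and 100% overlap, the unpaced estimate lands between the fair share and the full line rate, which is exactly the misleading apparent rate described above; the paced flow's estimate is bounded by the pacing rate regardless of how the bursts align.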
I have been playing around with this a bit, and it is (at the very least) quite interesting.

Bill Ver Steeg
DISTINGUISHED ENGINEER
versteb@cisco.com

-----Original Message-----
From: bloat-bounces@lists.bufferbloat.net [mailto:bloat-bounces@lists.bufferbloat.net] On Behalf Of Steinar H. Gunderson
Sent: Wednesday, April 22, 2015 12:00 PM
To: luca.muscariello@orange.com
Cc: bloat
Subject: Re: [Bloat] RE : DSLReports Speed Test has latency measurement built-in

On Wed, Apr 22, 2015 at 03:26:27PM +0000, luca.muscariello@orange.com wrote:
> BTW if a paced flow from Google shares a bloated buffer with a non
> paced flow from a non Google server, doesn't this turn out to be a
> performance penalty for the paced flow?

Nope. The paced flow puts less strain on the buffer (and hooray for that), which is a win no matter if the buffer is contended or not.

> fq_codel gives incentives to do pacing but if it's not deployed what's
> the performance gain of using pacing?

fq_codel doesn't give any specific incentive to do pacing. In fact, if absolutely all devices on your path would use fq_codel and have adequate buffers, I believe pacing would be largely a no-op.

/* Steinar */
--
Homepage: http://www.sesse.net/

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat