From: Dave Taht
To: David Lang
Cc: "cerowrt-devel@lists.bufferbloat.net"
Subject: Re: [Cerowrt-devel] Ubiquiti QOS
Date: Tue, 27 May 2014 19:12:32 -0700
List-Id: Development issues regarding the cerowrt test router project

On Tue,
May 27, 2014 at 4:27 PM, David Lang wrote:
> On Tue, 27 May 2014, Dave Taht wrote:
>
>> There is a phrase in this thread that is beginning to bother me.
>>
>> "Throughput". Everyone assumes that throughput is a big goal - and it
>> certainly is - and latency is also a big goal - and it certainly is -
>> but specifying what you want from "throughput" as a compromise with
>> latency is not the right thing...
>>
>> If what you want is actually "high speed in-order packet delivery" -
>> say, for example, a movie, a video conference, or youtube - excessive
>> latency with high throughput really, really makes in-order packet
>> delivery at high speed tough.
>
> the key word here is "excessive"; that's why I said that for max
> throughput you want to buffer as much as your latency budget will
> allow you to.

Again, I'm trying to make a distinction between "throughput" and
"packets delivered-in-order-to-the-user" (for which we need a new
word, I think).

The buffering should not be in the network; it can be in the
application.

Take our hypothetical video stream for example. I am 20ms RTT from
netflix. If I artificially inflate that by adding 50ms of in-network
buffering, that means a loss can take 120ms to recover from.

If instead I keep a 3*RTT buffer in my application, and expect that I
have 5ms worth of network buffering, I recover from a loss in 40ms.

(please note, it's late, I might not have got the math entirely right)

As physical RTTs grow shorter, the advantages of smaller buffers grow
larger. You don't need 50ms of queueing delay on a 100us path.

Many applications buffer for seconds, due to needing to be at least
2*(actual buffering+RTT) on the path.

>> You eventually lose a packet, and you have to wait a really long time
>> until a replacement arrives. Stuart and I showed that at the last
>> IETF. And you get the classic "buffering" song playing....
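The loss-recovery arithmetic above can be sketched in a few lines of
Python. The two-trips-at-effective-RTT recovery model here is my own
simplification (it gives slightly different numbers than the admittedly
late-night figures above), but it captures the direction of the
argument: recovery time grows linearly with queueing delay.

```python
# A rough model (an assumption, not from the thread): repairing a loss
# costs about two trips across the path, so recovery time scales with
# the *effective* RTT = physical RTT + queueing delay added by buffers.

def effective_rtt_ms(physical_rtt_ms, queueing_delay_ms):
    """RTT as the transport actually experiences it."""
    return physical_rtt_ms + queueing_delay_ms

def recovery_time_ms(physical_rtt_ms, queueing_delay_ms):
    """~2x effective RTT: one trip to notice the hole, one to fill it."""
    return 2 * effective_rtt_ms(physical_rtt_ms, queueing_delay_ms)

# 20ms from netflix, plus 50ms of in-network buffering:
print(recovery_time_ms(20, 50))   # prints 140 under this model
# same path, ~5ms of network queueing, buffering moved into the app:
print(recovery_time_ms(20, 5))    # prints 50 under this model
```

Whatever the exact constant, moving the buffer out of the network and
into the application keeps the effective RTT near the physical RTT, so
losses heal sooner.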
>
> Yep, and if you buffer too much, your "lost packet" is actually still
> in flight and eating bandwidth.
>
> David Lang
>
>> low latency makes recovery from a loss in an in-order stream much,
>> much faster.
>>
>> Honestly, for most applications on the web, what you want is high
>> speed in-order packet delivery, not "bulk throughput". There is a
>> whole class of apps (bittorrent, file transfer) that don't need
>> that, and we have protocols for those....
>>
>> On Tue, May 27, 2014 at 2:19 PM, David Lang wrote:
>>>
>>> the problem is that paths change, they mix traffic from streams,
>>> and in other ways the utilization of the links can change radically
>>> in a short amount of time.
>>>
>>> If you try to limit things to exactly the ballistic throughput, you
>>> are not going to be able to exactly maintain this state; you are
>>> either going to overshoot (too much traffic, requiring dropping
>>> packets to maintain your minimal buffer), or you are going to
>>> undershoot (too little traffic, and your connection is idle).
>>>
>>> Since you can't predict all the competing traffic throughout the
>>> Internet, if you want to maximize throughput, you want to buffer as
>>> much as you can tolerate for latency reasons. For most apps, this
>>> is more than enough to cause problems for other connections.
>>>
>>> David Lang
>>>
>>> On Mon, 26 May 2014, David P. Reed wrote:
>>>
>>>> Codel and PIE are excellent first steps... but I don't think they
>>>> are the best eventual approach. I want to see them deployed ASAP
>>>> in CMTS's and server load balancing networks... it would be a
>>>> disaster to not deploy the far better option we have today
>>>> immediately at the point of most leverage. The best is the enemy
>>>> of the good.
>>>>
>>>> But, the community needs to learn once and for all that throughput
>>>> and latency do not trade off.
>>>> We can in principle get far better latency while maintaining high
>>>> throughput.... and we need to start thinking about that. That
>>>> means that the framing of the issue as AQM is counterproductive.
>>>>
>>>> On May 26, 2014, Mikael Abrahamsson wrote:
>>>>>
>>>>> On Mon, 26 May 2014, dpreed@reed.com wrote:
>>>>>
>>>>>> I would look to queue minimization rather than "queue
>>>>>> management" (which implied queues are often long) as a goal, and
>>>>>> think harder about the end-to-end problem of minimizing total
>>>>>> end-to-end queueing delay while maximizing throughput.
>>>>>
>>>>> As far as I can tell, this is exactly what CoDel and PIE try to
>>>>> do. They try to find a decent tradeoff between having queues to
>>>>> make sure the pipe is filled, and not making these queues big
>>>>> enough to seriously affect interactive performance.
>>>>>
>>>>> The latter part looks like what LEDBAT does?
>>>>>
>>>>> Or are you thinking about something else?
>>>>
>>>> -- Sent from my Android device with K-@ Mail. Please excuse my
>>>> brevity.
>>>
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel

-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article