From: Jonathan Morton
To: sahil grover
Cc: codel@lists.bufferbloat.net
Date: Thu, 26 Feb 2015 15:56:14 +0200
Subject: Re: [Codel] why RED is not considered as a solution to bufferbloat.

Okay, let me walk you through this.

Let's say you are managing a fairly slow link, on the order of 1 megabit. The receiver on the far side of it is running a modern OS and has opened its receive window to a whole megabyte. It would take about 10 seconds to pass a whole receive window through the link.

This is a common situation in the real world, by the way.

Now, let's suppose that the sender was constrained to a slower send rate, say half a megabit, for reasons unrelated to the network. Maybe it's a streaming video service on a mostly static image, so it only needs to send small delta frames and compressed audio. It has therefore opened the congestion window as wide as it will go, to the limit of the receive window. But then the action starts, there's a complete scene change, and a big burst of data is sent all at once, because the congestion window permits it.

So the queue you're managing suddenly has a megabyte of data in it. Unless you do something about it, your induced latency just leapt up to ten seconds, and the VoIP call your user was on at the time will drop out.

Now, let's compare the behaviour of two AQMs: Codel and something almost, but not entirely, unlike Codel. Specifically, it does everything that Codel does, but at enqueue instead of dequeue time.

Both of them will let the entire burst into the queue. Codel only takes action if the queue remains more full than its threshold for more than 100ms. Since this burst arrives all at once, and the queue stayed nice and empty up until now, neither AQM will decide to mark or drop packets at that moment.
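If you want to sanity-check the figures used in this walkthrough, here is a rough back-of-the-envelope sketch in Python. The link rate, window size and packet size are just the assumptions of the example above, nothing Codel-specific:

    # Rough arithmetic for the scenario above (assumed figures only).
    LINK_RATE_BPS = 1_000_000     # "on the order of 1 megabit" per second
    RECV_WINDOW_B = 1_000_000     # receive window opened to a whole megabyte
    PACKET_SIZE_B = 1500          # a typical full-size packet (assumed MTU)

    # A full receive window takes ~8 s of raw serialization to cross the link;
    # with framing and ack overhead that is "about 10 seconds" of queue delay.
    print("window drain time: %.1f s" % (RECV_WINDOW_B * 8 / LINK_RATE_BPS))

    # One full-size packet takes ~12 ms of raw serialization; the walkthrough
    # below rounds this up to about 15 ms per delivered packet.
    print("per-packet time: %.1f ms" % (PACKET_SIZE_B * 8 / LINK_RATE_BPS * 1000))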
Now, packets are slowly delivered across the link. Each one takes about 15ms to deliver. They are answered by acks, and the sender obligingly sends fresh packets to replace them. After about six or seven packets have been delivered, both AQMs will decide to start marking packets (this is an ECN enabled flow). Let's assume that the next packet after this point will be marked. This will be the receiver's first clue that congestion is occurring.

For Codel, the next packet available to mark will be the eighth packet. So the receiver learns about the congestion event 120 ms after it began. It will echo the congestion mark back to the sender in the corresponding ack, which will immediately halve the congestion window. It will still take at least ten seconds to drain the queue.

For NotCodel, the next available packet for marking is the one a megabyte later than the eighth packet. The receiver won't see that packet until 10120 ms after the congestion event began. So the sender will happily keep the queue ten seconds long for the next ten seconds, instead of backing off straight away. It will take at least TWENTY seconds to drain the queue.

Note that even if the link bandwidth was 10 megabits rather than one, with the same receive window size, it would still take one second to drain the queue with Codel and two seconds with NotCodel.

This relatively simple traffic situation demonstrates that marking on dequeue rather than enqueue is twice as effective at managing queue length.
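To make the difference concrete, here is a small Python sketch of when the receiver first sees a congestion mark in each case. It only reproduces the timing arithmetic above, using the example's assumed figures (about 15 ms per packet, a megabyte of standing queue); it is not the real Codel state machine:

    import math

    PACKET_TIME_S  = 0.015       # ~15 ms to deliver one packet on the 1 Mbit/s link
    CODEL_INTERVAL = 0.100       # both AQMs wait until 100ms over threshold
    QUEUE_BYTES    = 1_000_000   # the megabyte burst sitting in the queue
    PACKET_BYTES   = 1500        # assumed full-size packet

    # Both AQMs decide to start marking once the queue has stayed over
    # threshold for a full interval, i.e. after six or seven packets drain.
    packets_before_decision = math.ceil(CODEL_INTERVAL / PACKET_TIME_S)   # 7
    first_marked = packets_before_decision + 1                            # the 8th packet

    # Codel marks at dequeue: the 8th packet out of the queue carries the
    # mark and reaches the receiver ~120 ms after the burst began.
    codel_mark_s = first_marked * PACKET_TIME_S

    # "NotCodel" marks at enqueue: only a newly arriving packet can carry
    # the mark, and it sits behind the whole megabyte already queued.
    queued_packets = QUEUE_BYTES // PACKET_BYTES                           # ~666
    notcodel_mark_s = (queued_packets + first_marked) * PACKET_TIME_S

    print("Codel:    mark seen after %.0f ms" % (codel_mark_s * 1000))     # ~120 ms
    print("NotCodel: mark seen after %.1f s" % notcodel_mark_s)            # ~10.1 s

The gap scales with the link rate, but the enqueue marker always has to wait out the entire standing queue before its first mark is seen.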

- Jonathan Morton
