Date: Fri, 27 Mar 2015 15:02:07 -0700 (PDT)
From: David Lang
To: "Bill Ver Steeg (versteb)"
Cc: "bloat@lists.bufferbloat.net"
Subject: Re: [Bloat] setting queue depth on tail drop configurations of pfifo_fast

BQL and HTB are not really comparable things.

All BQL does is change the definition of a buffer's length from X packets to X bytes. Using your example, 1000 packets of 1500 bytes is 1.5 MB, or 120 ms at 100 Mbps. But if you aren't transmitting 1500-byte packets and are transmitting 75-byte packets instead, it's only 6 ms worth of buffering. The bottom line is that sizing buffers in packets doesn't work.

HTB creates virtual network interfaces that chop up the available bandwidth of the underlying device. I believe that if the underlying device supports BQL, HTB works on byte-length allocations, not packet counts.

fq_codel doesn't have fixed buffer sizes; it takes a completely different approach, one that works much better in practice.

The document you found is actually out of date. Rather than trying to tune each thing for optimum performance and then measuring, just benchmark the stock, untuned setup you have against a simple fq_codel setup without any tweaks and see if that does what you want. You can work on tweaking things from there, but those improvements will be minor compared to making the switch in the first place.

A good tool for seeing performance (throughput and latency together) is netperf-wrapper. Set it up and just test the two configs. The RRUL test is especially good at showing the effects of the switch.
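For reference, a quick way to check whether a given driver actually implements BQL is to look at the per-queue sysfs entries (a sketch; "eth0" and queue "tx-0" are placeholders for your device, and on drivers without BQL support these counters may be absent or simply never move):

   # Per-queue BQL state; on a BQL-capable driver 'limit' and 'inflight'
   # change under load, on others they stay at zero or are missing.
   ls /sys/class/net/eth0/queues/tx-0/byte_queue_limits/
   cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
   cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/inflight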
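To make the HTB-versus-plain-fq_codel comparison concrete, here is a minimal sketch; the device name, rate, and handles are made-up examples, not anyone's actual config:

   # Shape eth0 to 10 Mbit with HTB and hang fq_codel off the leaf
   # class (instead of the default queue the leaf would otherwise get):
   tc qdisc add dev eth0 root handle 1: htb default 10
   tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit
   tc qdisc add dev eth0 parent 1:10 fq_codel

   # Or, with no shaping at all, just swap the root qdisc:
   tc qdisc replace dev eth0 root fq_codel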
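An RRUL run per configuration can be as simple as the following (a sketch: the hostname is a placeholder for whatever netperf server you test against, and -l 60 just picks a 60-second run):

   # One run per qdisc configuration; switch qdiscs between runs and
   # compare the resulting latency-under-load plots.
   netperf-wrapper -H netperf.example.com -l 60 rrul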
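As for the txqueuelen change not taking effect (see the message quoted below): the 3-second backlog shows up in "tc -s" on ifb_eth8, and the qdisc on an ifb device is sized independently of eth8's txqueuelen, so changing the latter can't drain it. A hedged sketch of two things worth trying; the device names follow the quoted message, and the HTB parent handle 1:10 is a guess at the shaper's layout:

   # Re-create the root qdisc after changing txqueuelen, since a qdisc
   # generally picks the value up when it is (re)attached:
   ip link set dev eth8 txqueuelen 100
   tc qdisc replace dev eth8 root pfifo_fast

   # If the backlog actually sits in the shaper on the ifb device, cap
   # that queue directly, e.g. with a plain pfifo on the HTB leaf:
   tc qdisc replace dev ifb_eth8 parent 1:10 pfifo limit 100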
David Lang

On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> Date: Fri, 27 Mar 2015 21:45:11 +0000
> From: "Bill Ver Steeg (versteb)"
> To: "bloat@lists.bufferbloat.net"
> Subject: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
>
> Bloaters-
>
> I am looking into how Adaptive Bitrate video algorithms interact with the
> various queue management schemes. I have been using the netperf and
> netperf-wrapper tools, along with the macros to set the link states (thanks
> Toke and Dave T). I am using HTB rather than BQL, which may have something
> to do with the issues below. I am getting some interesting ABR results,
> which I will share in detail with the group once I write them up.
>
> I need to set the transmit queue length of my Ubuntu ethernet path while
> running tests against the legacy pfifo_fast (tail drop) algorithm. The
> default value is 1000 packets, which boils down to 1.5 MBytes. At 100 Mbps,
> this gives me a 120 ms tail-drop buffer, which is big but somewhat
> reasonable. When I run tests at 10 Mbps, the buffer becomes a 1.2-second
> bloaty buffer. When I run tests at 4 Mbps, it becomes a 3-second
> extra-bloaty buffer. This gives me some very distinct ABR results, which I
> am looking into in some detail. I do want to try a few more delay values
> for tail drop at 4 Mbps.
>
> https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel
> says to set txqueuelen to the desired size, which makes sense. I have tried
> several ways to do this on Ubuntu, with no glory. The way that seemed like
> it should have worked was "ifconfig eth8 txqueuelen 100". When I then check
> txqueuelen using ifconfig, it looks correct. However, the delay
> measurements still stay up near 3 seconds under load. When I check the
> queue depth using "tc -s -d qdisc ls dev ifb_eth8", it shows the very large
> backlog in pfifo_fast under load.
>
> So, has anybody recently changed the ethernet/HTB transmit packet queue
> size for pfifo_fast in Ubuntu? If so, any pointers? I will also try to move
> over to BQL and see if that works better than HTB. I am not sure that my
> ethernet drivers have BQL support, though, as they complain when I try to
> load it as the queue discipline.
>
> Thanks in advance
> Bill VerSteeg

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat