[Bloat] setting queue depth on tail drop configurations of pfifo_fast

David Lang david at lang.hm
Fri Mar 27 18:02:07 EDT 2015

BQL and HTB are not really comparable things.

all BQL does is change the definition of the length of a buffer from X 
packets to X bytes.

Using your example, 1000 packets of 1500 bytes is 1.5MB, or 120ms at 100Mb. But 
if you aren't transmitting 1500-byte packets, and are transmitting 75-byte 
packets instead, it's only 6ms worth of buffering.

The bottom line is that sizing buffers by packets doesn't work.
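To make the arithmetic above concrete, here is the delay calculation for the same 1000-packet queue at the two packet sizes (100 Mbit/s link assumed, as in the example):

```shell
# Queueing delay of a packet-counted FIFO, in milliseconds:
#   delay_ms = packets * bytes_per_packet * 8 * 1000 / rate_bps
# 1000 full-size (1500-byte) packets at 100 Mbit/s:
echo $(( 1000 * 1500 * 8 * 1000 / 100000000 ))   # 120 ms
# The same 1000-packet queue filled with 75-byte packets:
echo $(( 1000 * 75 * 8 * 1000 / 100000000 ))     # 6 ms
```

The same packet count is a 20x difference in delay, which is why a byte-based limit like BQL behaves so much more predictably.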

HTB creates virtual network interfaces that chop up the available bandwidth of 
the underlying device. I believe that if the underlying device supports BQL, HTB 
is working on byte length allocations, not packet counts.
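For concreteness, a minimal HTB shaping setup looks something like the following sketch. The interface name eth0 and the 10mbit rate are placeholders for whatever your test rig uses:

```shell
# Hypothetical example: shape eth0 to 10 Mbit/s with HTB.
# Root HTB qdisc; unclassified traffic falls into class 1:10.
tc qdisc add dev eth0 root handle 1: htb default 10
# A single class carrying the whole shaped rate.
tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
```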

fq_codel doesn't use fixed buffer sizes; it takes a completely different 
approach (controlling queueing delay directly) that works much better in practice.
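Switching to it takes one command. This is a sketch with a placeholder interface name eth0, and the HTB leaf variant assumes a root HTB qdisc with handle 1: and a class 1:10 already exists:

```shell
# Hypothetical example: replace the default pfifo_fast root qdisc
# with fq_codel, with no parameter tuning to start:
tc qdisc replace dev eth0 root fq_codel
# In a shaped setup, fq_codel can instead be attached as an HTB leaf:
tc qdisc add dev eth0 parent 1:10 fq_codel
```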

The document that you found is actually out of date. Rather than trying to tune 
each thing for optimum performance and then measuring the results, just benchmark 
the stock, untuned setup that you have against the simple fq_codel version without 
any tweaks and see if that does what you want. You can then work on tweaking 
things from there, but the improvements will be minor compared to making the 
switch in the first place.

A good tool for seeing the performance (throughput and latency) is 
netperf-wrapper. Set it up and just test the two configs. The RRUL test is 
especially good at showing the effects of the switch.
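A typical invocation looks roughly like this. The host name is a placeholder, and this assumes netperf's netserver (and the tool's other dependencies) are already running on that machine:

```shell
# Hypothetical example: run the 60-second RRUL test against a test
# server, once per configuration, then compare the saved results.
netperf-wrapper -H netperf-server.example.org -l 60 rrul
```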

David Lang

On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> Date: Fri, 27 Mar 2015 21:45:11 +0000
> From: "Bill Ver Steeg (versteb)" <versteb at cisco.com>
> To: "bloat at lists.bufferbloat.net" <bloat at lists.bufferbloat.net>
> Subject: [Bloat] setting queue depth on tail drop configurations of pfifo_fast
> Bloaters-
> I am looking into how Adaptive Bitrate video algorithms interact with the 
> various queue management schemes. I have been using the netperf and netperf 
> wrapper tools, along with the macros to set the links states (thanks Toke and 
> Dave T). I am using HTB rather than BQL, which may have something to do with 
> the issues below. I am getting some interesting ABR results, which I will 
> share in detail with the group once I write them up.
> I need to set the transmit queue length of my Ubuntu ethernet path while 
> running tests against the legacy pfifo_fast (tail drop) algorithm.  The 
> default value is 1000 packets, which boils down to 1.5 MBytes. At 100 Mbps, 
> this gives me a 120ms tail drop buffer, which is big, but somewhat reasonable. 
> When I then run tests at 10 Mbps, the buffer becomes a 1.2 second bloaty 
> buffer. When I run tests at 4 Mbps, the buffer becomes a 3 second extra-bloaty 
> buffer. This gives me some very distinct ABR results, which I am looking into 
> in some detail. I do want to try a few more delay values for tail drop at 4 
> Mbps.
> https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel 
> says to set txqueuelen to the desired size, which makes sense. I have tried 
> several ways to do this on Ubuntu, with no glory. The way that seems it should 
> have worked was "ifconfig eth8 txqueuelen 100". When I then check the 
> txqueuelen using ifconfig, it looks correct. However, the delay measurements 
> still stay up near 3 seconds under load. When I check the queue depth using 
> "tc -s -d qdisc ls dev ifb_eth8", it shows the very large backlog in 
> pfifo_fast under load.
> So, has anybody recently changed the ethernet/HTB transmit packet queue size 
> for pfifo_fast in Ubuntu? If so, any pointers? I will also try to move over to 
> BQL and see if that works better than HTB...... I am not sure that my ethernet 
> drivers have BQL support though, as they complain when I try to load it as the 
> queue discipline.
> Thanks in advance
> Bill VerSteeg