[Bloat] setting queue depth on tail drop configurations of pfifo_fast

David Lang david at lang.hm
Fri Mar 27 18:46:04 EDT 2015


On Fri, 27 Mar 2015, Bill Ver Steeg (versteb) wrote:

> For this very specific test, I am doing one-way netperf-wrapper packet tests 
> that will (almost) always be sending 1500 byte packets. I am then running some 
> ABR cross traffic to see how it responds to FQ_AQM and AQM (where AQM == 
> Codel and PIE). I am using pfifo_fast as a baseline. The Codel, FQ_Codel, 
> PIE and FQ_PIE stuff is working fine. I need to tweak the pfifo_fast queue 
> length to do some comparisons.
>
> One of the test scenarios is a 3 Mbps ABR video flow on a 4 Mbps link, with 
> and without cross traffic. I have already done what you suggested, and the ABR 
> traffic drives the pfifo_fast code into severe congestion (even with no cross 
> traffic), with 3 seconds of bloat. This is a bit surprising until you think 
> about how the ABR code fills its video buffer at startup and then during 
> steady-state playout. I will send a detailed note once I get a chance to write 
> it up properly.
>
> I would like to reduce the tail drop queue size to 100 packets (down from the 
> default of 1000) and see how that impacts the test. 3 seconds of bloat is 
> pretty bad, and I would like to compare how ABR works at 1 second and at 
> 200-300 ms.
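
To answer the mechanical part of the question first: the usual way to shrink 
the pfifo_fast queue is to lower the interface's txqueuelen, since pfifo_fast 
takes its packet limit from there, or to swap in a plain pfifo with an explicit 
limit. A sketch, assuming the interface under test is eth0:

    # shrink the device queue length that pfifo_fast uses as its limit
    ip link set dev eth0 txqueuelen 100
    # re-attach the qdisc so the new length is definitely picked up
    tc qdisc replace dev eth0 root pfifo_fast

    # or: attach a plain FIFO with an explicit packet limit
    tc qdisc replace dev eth0 root pfifo limit 100

Depending on kernel version, pfifo_fast may only read txqueuelen when it is 
attached, so re-adding the qdisc after changing it is the safe route.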

I think the real question is what are you trying to find out?

No matter how you fiddle with the queue size, we know it's not going to work 
well. Without BQL, a queue short enough not to cause horrific bloat under load 
with large packets is not long enough to keep the link busy with small packets.
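
To put rough numbers on the mismatch for your 4 Mbps link (illustrative only): 
the default 1000-packet queue holds

    1000 pkts * 1500 B * 8 = 12 Mbit   -> ~3 s of buffering at 4 Mbps
    1000 pkts *   64 B * 8 = 512 kbit  -> ~128 ms at 4 Mbps

so the same packet count means a ~23x swing in delay depending on packet size, 
which is exactly the mismatch BQL addresses by limiting bytes instead of 
packets. That first line is also where your 3-second bloat figure comes from.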

If you are trying to do A/B comparisons to show that this doesn't work, that's 
one thing (and it sounds like you have already done that). But if you are trying 
to make fixed-size buffers work well, we don't think it can be done (not just 
because we have better ideas now, but from the 'been there, tried that, nothing 
worked' side of things).

Even with a 100-packet queue you can easily get bad latencies under load.
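
The arithmetic is easy to check: at 4 Mbps, a full 100-packet queue of 
1500-byte packets is

    100 pkts * 1500 B * 8 = 1.2 Mbit  -> ~300 ms

and that is the best case, with the whole link draining the queue; once cross 
traffic takes part of the link, the queue drains more slowly and the delay 
climbs well past that.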


Re-reading your post for the umpteenth time, here's what I think I'm seeing.

You are developing video streaming software that can adapt the bit rate of 
the video to fit within the available bandwidth, and you are trying to see how 
this interacts with the different queuing options.

Is this a good summary?


If so, then you basically want to do the same thing the TCP stack does: when 
you see a dropped packet or an ECN-tagged packet, lower the bit rate of the 
media you are streaming so that it uses less bandwidth.

This sounds like an extremely interesting thing to do. It will be interesting 
to see what folks who know the deeper levels of the OS say about what options 
you have for learning that such events have taken place.
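
As a starting point (a sketch, not a definitive answer): on Linux the kernel 
exports per-connection TCP state such as retransmit counts, RTT and cwnd via 
the TCP_INFO socket option, and the ss utility will show the same data from 
the command line:

    # list TCP sockets with internal info (retransmits, rtt, cwnd, ...)
    ss -ti

A sender could poll getsockopt(TCP_INFO) on its own socket to notice 
retransmits as a proxy for drops; getting at ECN marks from userspace is 
harder, so I'd treat this as a first step rather than a complete answer.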

David Lang


