[Codel] FQ_Codel lwn draft article review

Andrew McGregor andrewmcgr at gmail.com
Sat Nov 24 14:57:58 EST 2012


On 25/11/2012, at 5:19 AM, Dave Taht <dave.taht at gmail.com> wrote:

> On Sat, Nov 24, 2012 at 1:07 AM, Toke Høiland-Jørgensen <toke at toke.dk> wrote:
>> "Paul E. McKenney" <paulmck at linux.vnet.ibm.com> writes:
>> 
> 
> Indirectly observing the web load effects on that graph, while timing
> web page completion, would be good, when comparing pfifo_fast and
> various aqm variants.

Indeed

>>> Also, I know what ICMP is, but the UDP variants are new to me.  Could
>>> you please expand the "EF", "BK", "BE", and "CSS" acronyms?
>> 
>> The UDP ping times are simply roundtrips/second (as measured by netperf)
>> converted to ping times. The acronyms are diffserv markings, i.e.
>> EF=expedited forwarding, BK=bulk (CS1 marking), BE=best effort (no
>> marking).
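
(Aside: that conversion is just the reciprocal of netperf's reported
roundtrip rate; a rough Python sketch of what I assume the wrapper does:)

# Rough sketch of the roundtrips/second -> "ping time" conversion;
# my assumption of what the wrapper does with netperf's roundtrip rate.
def rr_rate_to_ping_ms(roundtrips_per_sec):
    """One roundtrip takes 1/rate seconds; report it in milliseconds."""
    return 1000.0 / roundtrips_per_sec

print(rr_rate_to_ping_ms(50.0))  # 50 roundtrips/s -> 20.0 ms per roundtrip
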
> 
> The classification tests are in there for a number of reasons.
> 
> 0) I needed multiple streams in the test anyway.
> 
> 1) Many people keep insisting that classification can work. It
> doesn't. It never has. Not over the wild and wooly internet. It only
> rarely does any good at all even on internal networks. It sometimes
> works on some kinds of udp streams, but that's it. The bulk of the
> problem is the massive packet streams modern offloads generate; what
> helps is breaking those up, everywhere possible, any time possible.
> 
> I had put up a graph last week, that showed each classification bucket
> for a tcp stream being totally ignored...
> 
> 2) Theoretically wireless 802.11e SHOULD respect classification. In
> fact, it does, on the ath9k, to a large extent. However, on the iwl I
> have, BE and BK traffic get completely starved by VO and VI traffic,
> which is something of a bug. I'm certain that due to inadequate
> testing, 802.11e classification is largely broken in the field, and
> I'd hoped this test would bring that out to more people.

802.11e doesn't prevent a station from starving itself, nor does it help the AP at all when there is contending traffic to deliver to the same station... all it does for you is keep one station with high-priority traffic to send or receive from being completely starved by another station with low-priority traffic.  It's not at all a complete solution, and we need something like the mythical mfq_codel to sort out the rest.
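
For reference, here is roughly how DSCP usually buckets into the four WMM
access categories (a sketch of the classic 802.1d precedence mapping; actual
drivers and firmware are free to differ):

# Sketch: the top three DSCP bits become an 802.1d priority, which buckets
# into the four WMM access categories.  Real drivers may not follow this.
def wmm_ac_for_dscp(dscp):
    prio = dscp >> 3                 # IP precedence = 802.1d priority
    if prio in (1, 2):
        return "AC_BK"               # background
    if prio in (0, 3):
        return "AC_BE"               # best effort
    if prio in (4, 5):
        return "AC_VI"               # video
    return "AC_VO"                   # voice (priorities 6 and 7)

for name, dscp in (("BE", 0), ("CS1", 8), ("EF", 46), ("CS5", 40), ("CS6", 48)):
    print(name, "->", wmm_ac_for_dscp(dscp))

Note that under this mapping EF lands in AC_VI, not AC_VO.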

> 3) I don't mind an effort to make classification work, particularly
> for traffic clearly marked background, such as bittorrent often is.
> Perhaps this is an opportunity to get IPv6 done up right, as it seems
> the diffserv bits are much more rarely fiddled with in transit

Doesn't look that hard, to be honest.
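
At least the userspace side is trivial; a minimal sketch of marking an IPv6
socket CS1 (assuming Linux, where IPV6_TCLASS is socket option 67 if this
Python build doesn't expose the constant):

import socket

CS1_DSCP = 0x08                      # "background" code point
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# The traffic class byte carries DSCP in its upper six bits (low two are ECN).
IPV6_TCLASS = getattr(socket, "IPV6_TCLASS", 67)   # 67 on Linux
s.setsockopt(socket.IPPROTO_IPV6, IPV6_TCLASS, CS1_DSCP << 2)
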

>> The UDP ping tests tend to not work so well on a loaded link,
>> however, since netperf stops sending packets after detecting
>> (excessive(?)) loss. Which is why you only see the UDP ping times on
>> the first part of the graph.
> 
> Netperf stops UDP_STREAM exchanges after the first lost udp packet.
> This is not helpful.
> 
> I keep noting that the next phase of the rrul development is to find a
> good pair of CIR one-way measurements that look a bit like voip.
> Either that test can get added to netperf or we use another tool, or
> we create one, and I keep hoping for recommendations from various
> people on this list. Come on, something like this
> exists? Anybody?

nmap -PU?
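
Failing a real tool, the skeleton isn't much code; a rough sketch of the sort
of thing (host, port and rate are placeholders, and it assumes something is
echoing UDP back, e.g. an inetd echo service), recording loss and RTT instead
of aborting:

#!/usr/bin/env python3
# Very rough sketch of a voip-like constant-rate UDP probe.
import socket, time

HOST, PORT = "192.0.2.1", 7   # placeholder target running a UDP echo service
INTERVAL = 0.020              # 20 ms, roughly one G.711 voice frame
PAYLOAD = b"x" * 172          # ~voip-sized datagram

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(1.0)
rtts = []
for i in range(500):
    t0 = time.time()
    s.sendto(PAYLOAD, (HOST, PORT))
    try:
        s.recvfrom(2048)
        rtts.append((time.time() - t0) * 1000.0)
    except socket.timeout:
        rtts.append(None)     # count the loss instead of aborting the run
    time.sleep(max(0.0, INTERVAL - (time.time() - t0)))

lost = rtts.count(None)
good = [r for r in rtts if r is not None]
print("loss %d/%d, mean rtt %.1f ms" % (lost, len(rtts), sum(good) / max(len(good), 1)))
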

>> I think what happens is that one of the streams (the turquoise
>> one) starts up faster than the other ones, consuming all the bandwidth
>> for the first couple of seconds until they adjust to the same level.
> 
> I'm not willing to draw this conclusion from this graph, and need
> to (or would like someone else to) set up a test in a controlled
> environment. The wrapper scripts
> can dump the raw data and I can manually plot using gnuplot or a
> spreadsheet, but it's tedious...

I may have some code that will help here, including CDFs and a rarely seen in the wild exponential weighted moving variance.
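
The variance bit, for the curious, is just the standard incremental recursion
(a sketch, not necessarily what my code does):

# Exponentially weighted moving mean/variance; alpha is the smoothing weight.
def ewm_stats(samples, alpha=0.1):
    mean, var, out = None, 0.0, []
    for x in samples:
        if mean is None:
            mean = float(x)
        else:
            delta = x - mean
            mean += alpha * delta
            var = (1.0 - alpha) * (var + alpha * delta * delta)
        out.append((mean, var))
    return out
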

>> These initial values are then scaled off the graph as outlier values.
> 
> Huge need for cdf plots and to present the outliers. In fact I'd like
> graphs that just presented the outliers. Another way to approach it
> would be, instead of creating static graphs, to use something like
> d3.js and incorporate the ability to zoom
> in, around, and so on, on multiple data sets. Or leverage mlab's tools.
> 
> I am no better at javascript than python.

Run interactively, Python's matplotlib lets you zoom.  I don't know whether that can be made into a zoomable web page, though.
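
For the CDFs, a bare-bones empirical-CDF plot with matplotlib looks something
like this (sample data is made up, just to show the shape of it):

# Empirical CDF with matplotlib; run it interactively and the pan/zoom
# toolbar comes for free.
import numpy as np
import matplotlib.pyplot as plt

def plot_cdf(samples, label):
    xs = np.sort(np.asarray(samples, dtype=float))
    ys = np.arange(1, len(xs) + 1) / float(len(xs))
    plt.step(xs, ys, where="post", label=label)

plot_cdf(np.random.exponential(20.0, size=1000), "example latencies (ms)")
plt.xlabel("latency (ms)")
plt.ylabel("cumulative fraction")
plt.legend()
plt.show()
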

Andrew

