<p>It may be worth noting that fq_codel is not stochastic in its fairness mechanism. SFQ suffers from the birthday effect because it hashes packets into buckets, which is what makes it stochastic. </p>
<p> - Jonathan Morton<br>
</p>
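<p>The birthday effect mentioned above is easy to quantify: with n active flows hashed into m buckets, the chance that two distinct flows collide (and therefore share one queue and one "fair" share) follows the classic birthday problem. A minimal sketch; the bucket counts below are illustrative example values, not any particular qdisc's defaults:</p>

```python
# Illustrative sketch of the "birthday effect" in hash-based flow
# queuing: two flows that hash into the same bucket share a queue.
# Bucket counts here are example values, not any qdisc's defaults.

def collision_probability(flows: int, buckets: int) -> float:
    """P(at least two of `flows` flows land in the same bucket),
    assuming a uniform hash over `buckets` buckets."""
    p_distinct = 1.0
    for i in range(flows):
        p_distinct *= (buckets - i) / buckets
    return 1.0 - p_distinct

# 20 concurrent flows: a small hash table collides often,
# a large one rarely -- which is why bucket count matters.
print(round(collision_probability(20, 128), 2))   # → 0.79
print(round(collision_probability(20, 1024), 2))  # → 0.17
```

<p>Enlarging the hash table only shrinks the collision probability; it never reaches zero, which is the sense in which any purely hash-bucketed scheme stays stochastic.</p>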
<div class="gmail_quote">On Nov 28, 2012 6:02 PM, "Paul E. McKenney" <<a href="mailto:paulmck@linux.vnet.ibm.com">paulmck@linux.vnet.ibm.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Dave gave me back the pen, so I looked to see what I had expanded<br>
FQ-CoDel to. The answer was... Nothing. Nothing at all.<br>
<br>
So I added a Quick Quiz as follows:<br>
<br>
Quick Quiz 2: What does the FQ-CoDel acronym expand to?<br>
<br>
Answer: There are some differences of opinion on this. The<br>
comment header in net/sched/sch_fq_codel.c says<br>
“Fair Queue CoDel” (presumably by analogy to SFQ's<br>
expansion of “Stochastic Fairness Queueing”), and<br>
“CoDel” is generally agreed to expand to “controlled<br>
delay”. However, some prefer “Flow Queue Controlled<br>
Delay” and still others prefer to prepend a silent and<br>
invisible "S", expanding to “Stochastic Flow Queue<br>
Controlled Delay” or “Smart Flow Queue Controlled<br>
Delay”. No doubt additional expansions will appear in<br>
the fullness of time.<br>
<br>
In the meantime, this article focuses on the concepts,<br>
implementation, and performance, leaving naming debates<br>
to others.<br>
<br>
This level of snarkiness would go over reasonably well in an LWN article, but<br>
I would -not- suggest this approach in an academic paper, just in case<br>
you were wondering. But if there is too much discomfort with snarking,<br>
I just might be convinced to take another approach.<br>
<br>
Thanx, Paul<br>
<br>
On Tue, Nov 27, 2012 at 08:38:38PM -0800, Paul E. McKenney wrote:<br>
> I guess I just have to be grateful that people mostly agree on the acronym,<br>
> regardless of the expansion.<br>
><br>
> Thanx, Paul<br>
><br>
> On Tue, Nov 27, 2012 at 07:43:56PM -0800, Kathleen Nichols wrote:<br>
> ><br>
> > It would be me that tries to say "stochastic flow queuing with CoDel"<br>
> > as I like to be accurate. But I think FQ-Codel is Flow queuing with CoDel.<br>
> > JimG suggests "smart flow queuing" because he is ever mindful of the<br>
> > big audience.<br>
> ><br>
> > On 11/27/12 4:27 PM, Paul E. McKenney wrote:<br>
> > > On Tue, Nov 27, 2012 at 04:53:34PM -0700, Greg White wrote:<br>
> > >> BTW, I've heard some use the term "stochastic flow queueing" as a<br>
> > >> replacement to avoid the term "fair". Seems like a more apt term anyway.<br>
> > ><br>
> > > Would that mean that FQ-CoDel is Flow Queue Controlled Delay? ;-)<br>
> > ><br>
> > > Thanx, Paul<br>
> > ><br>
> > >> -Greg<br>
> > >><br>
> > >><br>
> > >> On 11/27/12 3:49 PM, "Paul E. McKenney" <<a href="mailto:paulmck@linux.vnet.ibm.com">paulmck@linux.vnet.ibm.com</a>> wrote:<br>
> > >><br>
> > >>> Thank you for the review and comments, Jim! I will apply them when<br>
> > >>> I get the pen back from Dave. And yes, that is the thing about<br>
> > >>> "fairness" -- there are a great many definitions, many of the most<br>
> > >>> useful of which appear to many to be patently unfair. ;-)<br>
> > >>><br>
> > >>> As you suggest, it might well be best to drop discussion of fairness,<br>
> > >>> or to at the least supply the corresponding definition.<br>
> > >>><br>
> > >>> Thanx, Paul<br>
> > >>><br>
> > >>> On Tue, Nov 27, 2012 at 05:03:02PM -0500, Jim Gettys wrote:<br>
> > >>>> Some points worth making:<br>
> > >>>><br>
> > >>>> 1) It is important to point out that (and how) fq_codel avoids<br>
> > >>>> starvation:<br>
> > >>>> unpleasant as elephant flows are, it would be very unfriendly to never<br>
> > >>>> service them at all until they time out.<br>
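<p>Point 1 can be sketched with a toy deficit-round-robin loop. This is a simplification for illustration only, not the sch_fq_codel code; the quantum and packet sizes are example values:</p>

```python
# Toy deficit round robin (DRR), sketching why elephant flows are
# throttled but never starved: every backlogged flow gains a quantum
# of byte credit each round, so each gets serviced regularly.
# Simplified for illustration; not the sch_fq_codel implementation.

from collections import deque

QUANTUM = 1514  # example: one full Ethernet frame of credit per round

def drr_dequeue_round(flows):
    """One DRR round over `flows`, each a dict with a 'deficit'
    byte counter and a 'queue' deque of packet sizes.
    Returns the packet sizes dequeued this round."""
    sent = []
    for flow in flows:
        flow["deficit"] += QUANTUM
        # Send packets while this flow has credit to cover them.
        while flow["queue"] and flow["queue"][0] <= flow["deficit"]:
            size = flow["queue"].popleft()
            flow["deficit"] -= size
            sent.append(size)
    return sent

mouse = {"deficit": 0, "queue": deque([100])}           # sparse flow
elephant = {"deficit": 0, "queue": deque([1514] * 50)}  # bulk flow
print(drr_dequeue_round([mouse, elephant]))  # → [100, 1514]
```

<p>Both flows are served in the same round: the mouse's small packet goes out promptly, and the 50-packet elephant still gets one quantum's worth every round, so it drains steadily rather than timing out.</p>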
> > >>>><br>
> > >>>> 2) "fairness" is not necessarily what we ultimately want at all; you'd<br>
> > >>>> really like to penalize those who induce congestion the most. But we<br>
> > >>>> don't<br>
> > >>>> currently have a solution (though Bob Briscoe at BT thinks he does, and<br>
> > >>>> is<br>
> > >>>> seeing if he can get it out from under a BT patent), so the current<br>
> > >>>> fq_codel round robins ultimately until/unless we can do something like<br>
> > >>>> Bob's idea. This is a local information only subset of the ideas he's<br>
> > >>>> been<br>
> > >>>> working on in the congestion exposure (conex) group at the IETF.<br>
> > >>>><br>
> > >>>> 3) "fairness" is always in the eyes of the beholder (and should be left<br>
> > >>>> to<br>
> > >>>> the beholder to determine). "fairness" depends on where in the network<br>
> > >>>> you<br>
> > >>>> are. While being "fair" among TCP flows is a sensible default policy<br>
> > >>>> for a host, elsewhere in the network it may not be/usually isn't.<br>
> > >>>><br>
> > >>>> Two examples:<br>
> > >>>> o at a home router, you probably want to be "fair" according to transmit<br>
> > >>>> opportunities. We really don't want a single system remote from the<br>
> > >>>> router<br>
> > >>>> to be able to starve the network so that devices near the router get<br>
> > >>>> much<br>
> > >>>> less bandwidth than you might hope/expect.<br>
> > >>>><br>
> > >>>> What is more, you probably want to account for a single host using many<br>
> > >>>> flows, and ensure that it cannot "hog" bandwidth in the home<br>
> > >>>> environment, but only use its "fair" share.<br>
> > >>>><br>
> > >>>> o at an ISP, you must be "fair" between customers; it is best to leave<br>
> > >>>> the judgement of "fairness" at finer granularity (e.g. host and TCP<br>
> > >>>> flows)<br>
> > >>>> to the points closer to the customer's systems, so that they can enforce<br>
> > >>>> whatever definition of "fair" they need to themselves.<br>
> > >>>><br>
> > >>>><br>
> > >>>> Algorithms like fq_codel can be/should be adjusted to the circumstances.<br>
> > >>>><br>
> > >>>> And therefore exactly what you choose to hash against to form the<br>
> > >>>> buckets<br>
> > >>>> will vary depending on where you are. That at least one step (at the<br>
> > >>>> user's device) of this be TCP flow "fair" does have the great advantage<br>
> > >>>> of<br>
> > >>>> helping the RTT unfairness problem that violates the principle of "least<br>
> > >>>> surprise", such as that routinely seen in places like New Zealand.<br>
> > >>>><br>
> > >>>> This is why I have so many problems using the word "fair" near this<br>
> > >>>> algorithm. "fair" is impossible to define, overloaded in people's minds<br>
> > >>>> with TCP fair queuing, not even desirable much of the time, and by<br>
> > >>>> definition and design, even today's fq_codel isn't fair to lots of<br>
> > >>>> things,<br>
> > >>>> and the same basic algorithm can/should be tweaked in lots of directions<br>
> > >>>> depending on what we need to do. Calling this "smart" queuing or some<br>
> > >>>> such<br>
> > >>>> would be better.<br>
> > >>>><br>
> > >>>> When you've done another round on the document, I'll do a more detailed<br>
> > >>>> read.<br>
> > >>>> - Jim<br>
> > >>>><br>
> > >>>><br>
> > >>>><br>
> > >>>><br>
> > >>>> On Fri, Nov 23, 2012 at 5:18 PM, Paul E. McKenney <<br>
> > >>>> <a href="mailto:paulmck@linux.vnet.ibm.com">paulmck@linux.vnet.ibm.com</a>> wrote:<br>
> > >>>><br>
> > >>>>> On Fri, Nov 23, 2012 at 09:57:34AM +0100, Dave Taht wrote:<br>
> > >>>>>> David Woodhouse and I fiddled a lot with adsl and openwrt and a<br>
> > >>>>>> variety of drivers and network layers in a typical bonded adsl stack<br>
> > >>>>>> yesterday. The complexity of it all makes my head hurt. I'm happy<br>
> > >>>> that<br>
> > >>>>>> a newly BQL'd ethernet driver (for the geos and qemu) emerged from<br>
> > >>>> it,<br>
> > >>>>>> which he submitted to netdev...<br>
> > >>>>><br>
> > >>>>> Cool!!! ;-)<br>
> > >>>>><br>
> > >>>>>> I made a recording of us last night discussing the layers, which I<br>
> > >>>>>> will produce and distribute later...<br>
> > >>>>>><br>
> > >>>>>> Anyway, along the way, we fiddled a lot with trying to analyze where<br>
> > >>>>>> the 350ms or so of added latency was coming from in the traverse<br>
> > >>>> geo's<br>
> > >>>>>> adsl implementation and overlying stack....<br>
> > >>>>>><br>
> > >>>>>> Plots: <a href="http://david.woodhou.se/dwmw2-netperf-plots.tar.gz" target="_blank">http://david.woodhou.se/dwmw2-netperf-plots.tar.gz</a><br>
> > >>>>>><br>
> > >>>>>> Note: 1:<br>
> > >>>>>><br>
> > >>>>>> The netperf sample rate on the rrul test needs to be higher than<br>
> > >>>>>> 100ms in order to get a decent result at sub 10Mbit speeds.<br>
> > >>>>>><br>
> > >>>>>> Note 2:<br>
> > >>>>>><br>
> > >>>>>> The two nicest graphs here are nofq.svg vs fq.svg, which were taken<br>
> > >>>> on<br>
> > >>>>>> a gigE link from a Mac running Linux to another gigE link. (in other<br>
> > >>>>>> words, NOT on the friggin adsl link) (firefox can display svg, I<br>
> > >>>> don't<br>
> > >>>>>> know what else) I find the T+10 delay before stream start in the<br>
> > >>>>>> fq.svg graph suspicious and think the "throw out the outlier" code<br>
> > >>>> in<br>
> > >>>>>> the netperf-wrapper code is at fault. Prior to that, codel is merely<br>
> > >>>>>> buffering up things madly, which can also be seen in the pfifo_fast<br>
> > >>>>>> behavior, with its default of 1000 packets.<br>
> > >>>>><br>
> > >>>>> I am using these two in a new "Effectiveness of FQ-CoDel" section.<br>
> > >>>>> Chrome can display .svg, and if it becomes a problem, I am sure that<br>
> > >>>>> they can be converted. Please let me know if some other data would<br>
> > >>>>> make the point better.<br>
> > >>>>><br>
> > >>>>> I am assuming that the colored throughput spikes are due to occasional<br>
> > >>>>> packet losses. Please let me know if this interpretation is overly<br>
> > >>>> naive.<br>
> > >>>>><br>
> > >>>>> Also, I know what ICMP is, but the UDP variants are new to me. Could<br>
> > >>>>> you please expand the "EF", "BK", "BE", and "CSS" acronyms?<br>
> > >>>>><br>
> > >>>>>> (Arguably, the default queue length in codel can be reduced from 10k<br>
> > >>>>>> packets to something more reasonable at GigE speeds)<br>
> > >>>>>><br>
> > >>>>>> (the indicator that it's the graph, not the reality, is that the<br>
> > >>>>>> fq.svg pings and udp start at T+5 and grow minimally, as is usual<br>
> > >>>> with<br>
> > >>>>>> fq_codel.)<br>
> > >>>>><br>
> > >>>>> All sessions were started at T+5, then?<br>
> > >>>>><br>
> > >>>>>> As for the *.ps graphs, well, they would take david's network<br>
> > >>>> topology<br>
> > >>>>>> to explain, and were conducted over a variety of circumstances,<br>
> > >>>>>> including wifi, with more variables in play than I care to think<br>
> > >>>>>> about.<br>
> > >>>>>><br>
> > >>>>>> We didn't really get anywhere on digging deeper. As we got to purer<br>
> > >>>>>> tests - with a minimal number of boxes, running pure ethernet,<br>
> > >>>>>> switched over a couple of switches, even in the simplest two box<br>
> > >>>> case,<br>
> > >>>>>> my HTB based "ceroshaper" implementation had multiple problems in<br>
> > >>>>>> cutting median latencies below 100ms, on this very slow ADSL link.<br>
> > >>>>>> David suspects problems on the path along the carrier backbone as a<br>
> > >>>>>> potential issue, and the only way to measure that is with two one<br>
> > >>>> way<br>
> > >>>>>> trip time measurements (rather than rtt), time synced via ntp... I<br>
> > >>>>>> keep hoping to find a rtp test, but I'm open to just about any<br>
> > >>>> option<br>
> > >>>>>> at this point. anyone?<br>
> > >>>>>><br>
> > >>>>>> We also found a probable bug in mtr in that multiple mtrs on the<br>
> > >>>> same<br>
> > >>>>>> box don't co-exist.<br>
> > >>>>><br>
> > >>>>> I must confess that I am not seeing all that clear a difference<br>
> > >>>> between<br>
> > >>>>> the behaviors of ceroshaper and FQ-CoDel. Maybe somewhat better<br>
> > >>>> latencies<br>
> > >>>>> for FQ-CoDel, but not unambiguously so.<br>
> > >>>>><br>
> > >>>>>> Moving back to more scientific clarity and simpler tests...<br>
> > >>>>>><br>
> > >>>>>> The two graphs, taken a few weeks back, on pages 5 and 6 of this:<br>
> > >>>>>><br>
> > >>>>>><br>
> > >>>>><br>
> > >>>> <a href="http://www.teklibre.com/~d/bloat/Not_every_packet_is_sacred-Battling_Buff" target="_blank">http://www.teklibre.com/~d/bloat/Not_every_packet_is_sacred-Battling_Buff</a><br>
> > >>>> erbloat_on_wifi.pdf<br>
> > >>>>>><br>
> > >>>>>> appear to show the advantage of fq_codel fq + codel + head drop over<br>
> > >>>>>> tail drop during the slow start period on a 10Mbit link - (see how<br>
> > >>>>>> squiggly slow start is on pfifo fast?) as well as the marvelous<br>
> > >>>>>> interstream latency that can be achieved with BQL=3000 (on a 10 mbit<br>
> > >>>>>> link.) Even that latency can be halved by reducing BQL to 1500,<br>
> > >>>> which<br>
> > >>>>>> is just fine on a 10mbit. Below those rates I'd like to be rid of<br>
> > >>>> BQL<br>
> > >>>>>> entirely, and just have a single packet outstanding... in everything<br>
> > >>>>>> from adsl to cable...<br>
> > >>>>>><br>
> > >>>>>> That said, I'd welcome other explanations of the squiggly slowstart<br>
> > >>>>>> pfifo_fast behavior before I put that explanation on the slide....<br>
> > >>>> ECN<br>
> > >>>>>> was in play here, too. I can redo this test easily, it's basically<br>
> > >>>>>> running a netperf TCP_RR for 70 seconds, and starting up a<br>
> > >>>> TCP_MAERTS<br>
> > >>>>>> and TCP_STREAM for 60 seconds at T+5, after hammering down on BQL's<br>
> > >>>>>> limit and the link speeds on two sides of a directly connected<br>
> > >>>> laptop<br>
> > >>>>>> connection.<br>
> > >>>>><br>
> > >>>>> I must defer to others on this one. I do note the much lower<br>
> > >>>> latencies<br>
> > >>>>> on slide 6 compared to slide 5, though.<br>
> > >>>>><br>
> > >>>>> Please see attached for update including .git directory.<br>
> > >>>>><br>
> > >>>>> Thanx, Paul<br>
> > >>>>><br>
> > >>>>>> ethtool -s eth0 advertise 0x002 # 10 Mbit<br>
> > >>>>>><br>
> > >>>>><br>
> > >>>>> _______________________________________________<br>
> > >>>>> Cerowrt-devel mailing list<br>
> > >>>>> <a href="mailto:Cerowrt-devel@lists.bufferbloat.net">Cerowrt-devel@lists.bufferbloat.net</a><br>
> > >>>>> <a href="https://lists.bufferbloat.net/listinfo/cerowrt-devel" target="_blank">https://lists.bufferbloat.net/listinfo/cerowrt-devel</a><br>
> > >>>>><br>
> > >>>>><br>
> > >>><br>
> > >>> _______________________________________________<br>
> > >>> Codel mailing list<br>
> > >>> <a href="mailto:Codel@lists.bufferbloat.net">Codel@lists.bufferbloat.net</a><br>
> > >>> <a href="https://lists.bufferbloat.net/listinfo/codel" target="_blank">https://lists.bufferbloat.net/listinfo/codel</a><br>
> > >><br>
> > ><br>
> > ><br>
> ><br>
<br>
</blockquote></div>