[Codel] R: Making tests on Tp-link router powered by Openwrt svn

Kathleen Nichols nichols at pollere.com
Fri Dec 21 12:51:17 EST 2012


Oh, yes, my sfqcodel tests in the simulator all use per-packet rounding,
so performance will vary.

On 12/21/12 9:43 AM, Dave Taht wrote:
> On Fri, Dec 21, 2012 at 6:06 PM, Kathleen Nichols <nichols at pollere.com> wrote:
>> On 12/21/12 2:32 AM, Dave Taht wrote:
>>> On Fri, Dec 21, 2012 at 5:19 AM, Alessandro Bolletta
>>> <alessandro at mediaspot.net> wrote:
> 
>> I don't understand why you are lowering the target explicitly, since using
>> an MTU's worth of packets as the alternate target appeared to work quite
>> well at rates down to 64 kbps in simulation, as well as with changing rates.
>> I thought Van explained this nicely in his talk at IETF.
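
For concreteness: the "alternate target" here is, roughly, CoDel's rule
that it never enters or stays in drop state while the queue holds no more
than one MTU's worth of bytes, however long the head packet has waited.
A minimal sketch of that check, assuming a 1500-byte MTU and the 5 ms
default target (names are illustrative, not the actual ns-2 or Linux
code):

    /* Sketch of CoDel's "alternate target": dropping is suppressed
     * whenever the backlog is at most one MTU, regardless of sojourn
     * time.  Purely illustrative; not the ns-2 sfqcodel or Linux source. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_US 5000   /* 5 ms default target         */
    #define MTU_BYTES 1500   /* assumed maximum-size packet */

    static bool above_target(uint64_t sojourn_us, uint32_t backlog_bytes)
    {
        if (backlog_bytes <= MTU_BYTES)
            return false;    /* one MTU or less queued: never drop */
        return sojourn_us >= TARGET_US;
    }

    int main(void)
    {
        /* 20 ms of delay but only a single packet queued: no drop. */
        printf("%d\n", above_target(20000, 1400));
        /* 20 ms of delay with a real standing queue: drop-eligible. */
        printf("%d\n", above_target(20000, 15000));
        return 0;
    }
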
> 
> I call this the horizontal standing queue problem, and it's specific
> to *fq*_codel (and to some extent, the derivatives so far).
> 
> I figure codel, by itself, copes better.
> 
> But in fq_codel, each queue (of 1024 by default) is a full-blown codel
> queue, and thus drops out of drop state when the MTU limit is hit.
> 
> Worse, fq_codel always sends a full quantum's worth of packets from
> each queue, in order, not mixing them. This is perfectly fine and even
> reasonably sane at truly high bandwidths where TSO/GSO and GRO are
> being used, but it's a pretty terrible idea at 100 Mbit and below, and
> it results in moving rapidly back and forth through the timestamps in
> the backlog on that queue...
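
To make the quantum behavior concrete, here is a toy sketch of the
deficit-round-robin bookkeeping being described. The 1514-byte quantum
matches the usual default, but the two flows and 300-byte packets are
just assumptions for the demo, and the real qdisc also runs CoDel on
every flow:

    /* Toy deficit round robin: each flow gets "quantum" bytes of credit
     * per visit and keeps sending until the credit runs out, so small
     * packets from one flow go out back-to-back rather than interleaved. */
    #include <stdio.h>

    #define QUANTUM 1514   /* one MTU-sized quantum (typical default) */
    #define NFLOWS  2
    #define PKT_SZ  300    /* pretend every packet is 300 bytes       */
    #define NPKTS   20     /* packets to dequeue in the demo          */

    int main(void)
    {
        int deficit[NFLOWS] = { 0 };
        int flow = 0, sent = 0;

        while (sent < NPKTS) {
            if (deficit[flow] <= 0) {
                /* credit exhausted: top it up and serve the next flow */
                deficit[flow] += QUANTUM;
                flow = (flow + 1) % NFLOWS;
                continue;
            }
            /* send one packet from this flow and charge its size */
            deficit[flow] -= PKT_SZ;
            printf("pkt %2d from flow %d (deficit now %5d)\n",
                   ++sent, flow, deficit[flow]);
        }
        return 0;
    }

Running it shows several packets from one flow going out in a row before
the scheduler moves on, which is the "not mixing them" effect at small
packet sizes.
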
> 
> The original fq_codel did no fate-sharing at all; the current one (in
> 3.6) does.
> 
> I produced (several months ago) a couple of versions of fq_codel (the
> "e" and "n" variants) that did better mixing at a higher CPU cost. It
> was really hard to see differences. I just found a bug in my nfq_codel
> patch today, in fact...
> 
> I also fiddled a lot with the per-queue "don't shoot me" MTU limit: in
> one version it was shared amongst all queues, in another it was set to
> the size of the biggest packet to have hit that queue since the end of
> the last drop interval.
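
A rough sketch of what that second variant could look like, tracking the
biggest packet per queue and forgetting it when that queue's drop
interval ends (purely illustrative; the actual patches may well differ):

    /* Per-queue "don't shoot me" threshold: use the largest packet seen
     * on this queue since its last drop interval ended, rather than a
     * fixed MTU shared by all queues.  Illustrative sketch only. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct flow_state {
        uint32_t maxpacket;  /* biggest packet seen this interval */
        uint32_t backlog;    /* bytes currently queued            */
    };

    /* On enqueue, remember the largest packet this queue has seen. */
    static void note_packet(struct flow_state *f, uint32_t len)
    {
        if (len > f->maxpacket)
            f->maxpacket = len;
    }

    /* Only allow drops while more than one "biggest packet" is queued. */
    static bool may_drop(const struct flow_state *f)
    {
        return f->backlog > f->maxpacket;
    }

    /* When this queue leaves drop state, start the estimate over. */
    static void drop_interval_ended(struct flow_state *f)
    {
        f->maxpacket = 0;
    }

    int main(void)
    {
        struct flow_state f = { 0, 0 };
        note_packet(&f, 1400);
        f.backlog = 1400;
        printf("%d\n", may_drop(&f));  /* 0: only one packet's worth */
        f.backlog = 4200;
        printf("%d\n", may_drop(&f));  /* 1: a real backlog          */
        drop_interval_ended(&f);
        return 0;
    }
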
> 
> So I then decided to really look at this, hard, by working on a set
> of benchmarks that made it easy to load up a link with a wide variety
> of actual traffic. I'm really happy with the rrul-related tests toke
> has put together.
> 
> I tried to talk about this coherently then, and failed, so once I had
> good tests showing various behaviors, I figured I'd talk about it. I
> would still prefer not to talk about it until I have results I can
> trust, and finding all the sources of experimental error in the lab
> setups so far has eaten most of my time.
> 
> I didn't mean to jump the gun on that today; I have a few weeks left
> to go, and I still need to collate the analysis coherently with the
> lab results. For all I know, some aspect of the lab implementations,
> the known BQL issue (bytes rather than packets), or the fact that HTB
> buffers up a packet may be more key to the problem. If it's real.
> 
> In the benchmarks I've been doing via toke's implementation of the
> rrul test suite, *on an asymmetric link* fq_codel behavior gets more
> stable (uses up the largest percentage of bandwidth) if you set target
> above the delivery time of a maximum-size packet at that bandwidth.
> Another benchmark that will show bad behavior regardless is to try
> pounding a line flat for a while with dozens of full-rate streams.
> 
> In your sfq_codel (which I hope to look at next week), do you have a
> shared MTU limit for all streams, or do you do it on a per-stream
> basis?
> 
>>
>>         Kathie