[Codel] [Bloat] Latest codel, fq_codel, and pie sim study from cablelabs now available
Dave Taht
dave.taht at gmail.com
Wed May 8 18:25:52 EDT 2013
Eric, you supplied the prio scheduler example below as an example of how it
should *not* be done, right?
On Tue, May 7, 2013 at 1:30 PM, Eric Dumazet <eric.dumazet at gmail.com> wrote:
> On Tue, 2013-05-07 at 14:56 -0500, Wes Felter wrote:
>
> > Is it time for prio_fq_codel or wfq_codel? That's what comes to mind
> > when seeing the BitTorrent vs. VoIP results.
>
>
0) If evenly distributed, packet loss as high as what is reported here
wouldn't actually affect VoIP much.
I'd really like to have a better grip on the packet size, timing, and other
behavior of the newer codecs, notably VP8. Anyone?
One thing fq_codel enables is that silence suppression can actually work,
so there is no need to send VoIP as CBR, and most speech is silence.
1) All of the torrent clients I've looked at come, by default, with an
upload rate limiter set to a very low value in the 50-150k range, so the
upload component of the torrent problem has already been *solved via market
demand*.
I would certainly like those running Windows and Mac to let us know what
the defaults are on a variety of torrent clients these days... If everybody
here could just download one at random from
http://en.wikipedia.org/wiki/Comparison_of_BitTorrent_clients
and try it on distributing some popular, but legal, data,
http://www.gutenberg.org/wiki/Gutenberg:The_CD_and_DVD_Project
through their existing gateways and/or through cerowrt/openwrt, we'd be
able to update this list, and Wikipedia, with some info on that.
A lot of people have been running scared of torrent for a long time, and my
limited experience with current clients indicates strongly that it's not a
huge problem anymore. I tend to see torrent packets marked CS1 too, which
is either an artefact of all the DPI people ended up doing on it, or the
clients co-operating....
Also LEDBAT and uTP seem to behave very differently, and I have some doubts
about the quality of the TCP-ledbat model used in this study.
2) I have fiddled with various forms of wfq (qfq, some of my own stuff,
prioritization, and currently a toy model with a very limited number of
queues and a well-defined set of prioritizations).
3) It seems possible, in multiple ways, to make fq_codel's drop strategy a
little more aggressive when there are high numbers of flows.
The horizontal standing queue problem in fq_codel is bothersome and has
been discussed on this list several times since day one. It comes from the
original single-queue codel "maxpacket" concept, where dropping from a
queue with only a single packet in it is a bad idea - among other things it
might stomp on a packet in RTO, a FIN, etc. (Note that maxpacket itself is
an artifact of the ns2 code - it may not be necessary to keep an MTU's
worth of packets around; merely a single packet might be fine.)
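For anyone who hasn't stared at the codel code lately, the check in
question, stripped of the kernel plumbing, looks roughly like this. This is
a simplified model for discussion only - the struct and field names are
borrowed from the Linux implementation, but it is not the actual kernel
code:

/*
 * Simplified model of codel's "should we keep dropping?" test, for
 * discussion only.  Names are borrowed from the Linux implementation but
 * this is not the kernel code; times are plain microseconds here.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct codel_params {
        uint32_t target_us;    /* target sojourn time, e.g. 5000 us */
        uint32_t interval_us;  /* e.g. 100000 us                    */
};

struct codel_vars {
        uint32_t ldelay_us;          /* sojourn time of the head packet */
        uint64_t first_above_time;   /* 0 = not (yet) above target      */
};

struct codel_stats {
        uint32_t maxpacket;    /* largest packet seen so far, in bytes */
};

/*
 * Returns false - i.e. leave the drop state - as soon as the head-of-queue
 * delay falls back under target OR the backlog has drained to no more than
 * one maxpacket worth of bytes.  The second clause is the "exit at
 * maxpacket" rule discussed above: don't keep dropping from a queue that
 * is down to its last packet.
 */
static bool should_stay_in_drop_state(const struct codel_params *p,
                                      struct codel_vars *v,
                                      const struct codel_stats *s,
                                      uint32_t backlog_bytes)
{
        if (v->ldelay_us < p->target_us || backlog_bytes <= s->maxpacket) {
                v->first_above_time = 0; /* went below: stay below for interval */
                return false;
        }
        return true;
}

int main(void)
{
        struct codel_params p = { .target_us = 5000, .interval_us = 100000 };
        struct codel_vars v = { .ldelay_us = 30000 };  /* 30 ms of delay */
        struct codel_stats s = { .maxpacket = 1514 };

        /* Plenty of backlog: keep dropping.  One packet left: stop. */
        printf("backlog 20000: %d\n", should_stay_in_drop_state(&p, &v, &s, 20000));
        printf("backlog  1514: %d\n", should_stay_in_drop_state(&p, &v, &s, 1514));
        return 0;
}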
I long ago created a version of codel (codel3.h) that ripped out all single
queue assumptions from the codel code... in preparation for finding
something that worked better when there were multiple queues in play.
I have a "special" version of fq_codel (efq_codel) in cerowrt where I have
fiddled with various tweaks and tests and options. I've been pretty good
about not documenting this until now, because everything I've tried to date
was not much of an improvement, (nor did I have the robust set of test and
edge cases I do now)
My most recent idea for increasing fq_codel's drop aggressiveness under
load involves relaxing the exit-the-drop-scheduler-at-maxpacket strategy in
the case where it would otherwise drop and the measured delay in that
fq_codel queue exceeds the current interval * X, maybe with an added
qualification that the packet size be greater than 1000 bytes.
It's hacky, but I've repeatedly proven to myself that the most trivial of
changes like this can have enormous side effects... and this one is
targeted specifically at the behavior of torrents, as best as I understand
it.
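Concretely, the tweak would only touch that exit test: if we were about to
drop anyway, the measured delay in this fq_codel queue exceeds X * interval,
and the head packet is a big one (> 1000 bytes), keep dropping even though
the backlog is down at maxpacket. A sketch of the shape of it, reusing the
toy types from the earlier snippet - completely untested, and DELAY_FACTOR_X
and BIG_PACKET_BYTES are just placeholders for the knobs mentioned above:

/*
 * Sketch of the relaxed exit test - NOT tested, just the shape of the idea.
 * DELAY_FACTOR_X and BIG_PACKET_BYTES are made-up tunables.
 */
#include <stdbool.h>
#include <stdint.h>

#define DELAY_FACTOR_X    2     /* "interval * X" from the text         */
#define BIG_PACKET_BYTES  1000  /* only stay aggressive for big packets */

struct codel_params { uint32_t target_us, interval_us; };
struct codel_vars   { uint32_t ldelay_us; uint64_t first_above_time; };
struct codel_stats  { uint32_t maxpacket; };

static bool relaxed_should_stay_in_drop_state(const struct codel_params *p,
                                              struct codel_vars *v,
                                              const struct codel_stats *s,
                                              uint32_t backlog_bytes,
                                              uint32_t head_pkt_bytes)
{
        bool under_target = v->ldelay_us < p->target_us;
        bool at_maxpacket = backlog_bytes <= s->maxpacket;

        /*
         * The relaxation: a queue whose delay is still far above interval,
         * with a big packet at its head, keeps taking drops even though it
         * has drained to maxpacket.
         */
        if (at_maxpacket && !under_target &&
            v->ldelay_us > DELAY_FACTOR_X * p->interval_us &&
            head_pkt_bytes > BIG_PACKET_BYTES)
                return true;

        /* Otherwise, behave exactly as before. */
        if (under_target || at_maxpacket) {
                v->first_above_time = 0;
                return false;
        }
        return true;
}

int main(void)
{
        struct codel_params p = { .target_us = 5000, .interval_us = 100000 };
        struct codel_vars v = { .ldelay_us = 250000 }; /* 250 ms: badly backed up */
        struct codel_stats s = { .maxpacket = 1514 };

        /* Old rule would stop here (backlog == maxpacket); this keeps dropping. */
        return !relaxed_should_stay_in_drop_state(&p, &v, &s, 1514, 1500);
}

Whether X should be 2 or something else entirely is exactly the kind of
thing that needs testing against the torrent cases above.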
> Sure !
>
> eth=eth0
> tc qdisc del dev $eth root 2>/dev/null
> tc -batch << EOF
> qdisc add dev $eth root handle 1: prio bands 3
> qdisc add dev $eth parent 1:1 handle 11: fq_codel
> qdisc add dev $eth parent 1:2 handle 12: fq_codel
> qdisc add dev $eth parent 1:3 handle 13: fq_codel
> EOF
>
>
Heh. I am hoping you are providing this as a negative proof!? The strict
prioritization of this particular Linux scheduler means that a single
full-rate TCP flow in class 1:1 will completely starve classes 1:2 and 1:3.
..snip snip..
static struct sk_buff *prio_dequeue(struct Qdisc *sch)
{
        struct prio_sched_data *q = qdisc_priv(sch);
        int prio;

        for (prio = 0; prio < q->bands; prio++) {
                struct Qdisc *qdisc = q->queues[prio];
                struct sk_buff *skb = qdisc_dequeue_peeked(qdisc);
                if (skb) {
                        qdisc_bstats_update(sch, skb);
                        sch->q.qlen--;
                        return skb;
                }
        }
        return NULL;
}
Some level of fairness between service classes is needed too. My most
common setting for the "cake" shaper has been 20% minimum for the
background traffic, for example. I am unsure if codel is really the right
thing for the highest priority qdisc, as everything
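To make "some level of fairness between service classes" concrete: instead
of the strict scan in prio_dequeue() above, each band can carry a quantum
and a deficit, so even the background band keeps draining at roughly its
configured share (about 20% with the made-up weights below). This is a toy
model only - not cake, not kernel code:

/*
 * Toy model: weighted (deficit-based) scan over bands instead of strict
 * priority.  Not cake, not kernel code; quanta and packet sizes are made up.
 */
#include <stdio.h>

#define NBANDS 3

struct band {
        long quantum;  /* bytes of credit added per round: sets the share */
        long deficit;  /* bytes this band may still send this round       */
        long backlog;  /* bytes queued (stand-in for a real queue)        */
};

/*
 * Pick the next band to dequeue from.  Every backlogged band gets quantum
 * bytes of credit per round, so the lowest band drains at roughly
 * quantum[i] / sum(quantum) of the link instead of being starved outright.
 */
static int pick_band(struct band *b)
{
        for (;;) {
                int busy = 0;

                for (int i = 0; i < NBANDS; i++) {
                        if (b[i].backlog <= 0)
                                continue;
                        busy = 1;
                        if (b[i].deficit > 0)
                                return i;
                }
                if (!busy)
                        return -1;  /* everything is empty */

                for (int i = 0; i < NBANDS; i++)
                        if (b[i].backlog > 0)
                                b[i].deficit += b[i].quantum;
        }
}

int main(void)
{
        /* roughly 53% / 27% / 20% when all three bands stay busy */
        struct band bands[NBANDS] = {
                { .quantum = 8000, .backlog = 10 * 1000 * 1000 },
                { .quantum = 4000, .backlog = 10 * 1000 * 1000 },
                { .quantum = 3000, .backlog = 10 * 1000 * 1000 }, /* background floor */
        };
        long sent[NBANDS] = { 0 };

        for (int n = 0; n < 2000; n++) {
                int i = pick_band(bands);
                if (i < 0)
                        break;
                long pkt = 1500;  /* pretend every packet is 1500 bytes */
                bands[i].deficit -= pkt;
                bands[i].backlog -= pkt;
                sent[i] += pkt;
        }
        for (int i = 0; i < NBANDS; i++)
                printf("band %d: %ld bytes\n", i, sent[i]);
        return 0;
}

With all three bands kept busy this prints roughly a 53/27/20 split, which
is the sort of floor for background traffic I have in mind.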
--
Dave Täht
Fixing bufferbloat with cerowrt:
http://www.teklibre.com/cerowrt/subscribe.html