On Tue, May 31, 2011 at 12:05:51AM +0200, Juliusz Chroboczek wrote:

> I don't understand what you're doing on eth6, which has both prio and htb.
>
> You're systematically putting sfq below sfb. You should be aware that
> since sfb keeps the queues short, the effect of sfq is reduced somewhat
> -- you may not be getting all the fairness you're expecting.

Basically, my university's main router runs Linux with 9 GigE NICs to
different networks (we can't afford expensive routers :) ). My idea was
for the Internet interface to use two queues (bands, in prio parlance):
one for realtime or very important traffic, and the other for the rest
of the traffic, which goes through SFB (this is the upload to the
Internet). For every client network I used five queues, the last one
being the default, shaped and "fairnessed" (this is the download from
the Internet). The WiFi network is a client network connected to GigE
switches, which in turn connect to 125 Linux APs (WRT160NLs with
OpenWRT) across the entire campus.

I've attached my "QoS" scripts so you can get an idea of why some
things are done the way they are, though I know it's too much to ask
you to read through them; my AQM/QoS setup is a little elaborate.
("Unhandled" bands in the prio qdisc are plain pfifo with qlen 10;
ip_qos is the main script, and it calls ip_qos_lan for every client
network.)

> Your packet loss rates are
>
> eth4 (Internet): 0.6%
> eth2 (LAN): 0.2%
> eth6 (Wifi): 3.6%
>
> Only eth6 is congested. Three quarters of the eth6 drops are in sfb
> 52:. There's 3.6 times more earlydrop than bucketdrop, which seems okay
> to me. Increasing increment/decrement might reduce the bucketdrop
> somewhat; so would increasing the target, at the cost of increasing the
> amount of queueing.
>
> Thanks again for the data,

Thank you for your time and analysis!

> P.S. Wow ! Guatemala !

You're welcome!

- Otto
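
P.S. In case it helps anyone reading along, here is a much simplified
sketch of the kind of tc setup I described above. The interface names
match this thread, but the rates, priomap, class ids and SFB parameters
are only illustrative placeholders, not the actual values from
ip_qos/ip_qos_lan:

  # Internet uplink (eth4): two prio bands, realtime vs. everything else.
  # Real classification is done with filters (omitted here); the priomap
  # just sends all unfiltered traffic to the bulk band.
  tc qdisc add dev eth4 root handle 1: prio bands 2 \
      priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

  # Band 1:1 -- realtime / very important traffic, short plain FIFO
  tc qdisc add dev eth4 parent 1:1 handle 10: pfifo limit 10

  # Band 1:2 -- bulk upload, SFB to keep the queue short
  tc qdisc add dev eth4 parent 1:2 handle 20: sfb
  # SFQ below the SFB for per-flow fairness
  tc qdisc add dev eth4 parent 20:1 handle 21: sfq perturb 10

  # A client network (e.g. the WiFi net on eth6): HTB shaping with the
  # last class as default, then SFB + SFQ on it.
  tc qdisc add dev eth6 root handle 5: htb default 50
  tc class add dev eth6 parent 5: classid 5:50 htb rate 80mbit ceil 100mbit
  tc qdisc add dev eth6 parent 5:50 handle 52: sfb \
      increment 0.001 decrement 0.0001 target 30
  tc qdisc add dev eth6 parent 52:1 handle 53: sfq perturb 10

The increment/decrement/target values on the eth6 SFB are only there to
show where the tuning you suggested would go; I still have to work out
sensible numbers for our load.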