General list for discussing Bufferbloat
* [Bloat] Applying RED93 in south africa
@ 2011-05-21 14:27 Dave Taht
  2011-05-21 19:11 ` Jonathan Morton
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Dave Taht @ 2011-05-21 14:27 UTC (permalink / raw)
  To: bloat

[-- Attachment #1: Type: text/plain, Size: 1712 bytes --]

The default qos-scripts for openwrt are being tested in South Africa right
now as part of the first bismark 'capetown' deployment of a whole bunch of
wndr3700v2 routers.

Bismark contains extensive debloating of the ar71xx and ath9k device
drivers, and shortened txqueues.

On the plus side, the qos-scripts hold latencies down below 400ms for
priority traffic.

On the minus side, I'm not seeing RED kick in (no packet loss to speak of),
ECN is not being negotiated on TCP connections to SA(??), and single-stream
downloads are at about 3/4 of the overall bandwidth available.

I would be very interested in a little analysis of the packet captures and
data contained in bug:

http://www.bufferbloat.net/issues/171

and email thread:

https://lists.bufferbloat.net/pipermail/bismark-devel/2011-May/000177.html

I've also set up a wndr3700v2 box in Georgia with these QoS settings in
place, and some big files worth downloading. It is temporarily at:

http://gw.lab.bufferbloat.net/capetown/capetown-wndr3700v2/

Experience the pain of the Internet on another continent! (note that the gw
is up on ipv6 as well)

(If you merely want a copy of the near-final capetown release of bismark
for a wndr3700v2, you can download it without the simulated pain at:

http://mirrors.projectbismark.net/downloads/capetown/capetown-wndr3700v2/

Only the "v2" is supported.)

SFB is also in this release, but lacking good scripts for it...

So, out of the 100+ papers on RED93, which one applies best in this
situation? Does RED93 drop packets properly when ECN is not available? Etc.
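
For concreteness, a RED instance of the general kind in question looks
roughly like the line below -- the numbers are purely illustrative, not the
actual capetown qos-script values:

  tc qdisc add dev eth0 root handle 10: red \
      limit 400000 min 30000 max 90000 avpkt 1000 \
      burst 55 probability 0.02 bandwidth 4mbit ecn

With the "ecn" flag, ECN-capable flows get marked rather than dropped once
the average queue passes "min"; flows that didn't negotiate ECN are dropped
as in classic RED.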


-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com

* Re: [Bloat] Applying RED93 in south africa
  2011-05-21 14:27 [Bloat] Applying RED93 in south africa Dave Taht
@ 2011-05-21 19:11 ` Jonathan Morton
  2011-05-21 19:29   ` Dave Taht
  2011-05-28 20:02 ` Juliusz Chroboczek
  2011-05-28 20:07 ` Juliusz Chroboczek
  2 siblings, 1 reply; 13+ messages in thread
From: Jonathan Morton @ 2011-05-21 19:11 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat


On 21 May, 2011, at 5:27 pm, Dave Taht wrote:

> Experience the pain of the Internet on another continent! (note that the gw is up on ipv6 as well)

Well, it's not very fast in terms of throughput, but the latency seems to be about as good as I get normally.  But perhaps that's because I'm starting from Northern Europe and so I'm already used to intercontinental traffic due to the prevalence of US-based servers.

I do see occasional brief stalls during the download, but these are substantially less intrusive than what I get on my 3G modem.  They suggest that packets are being dropped at reasonably regular intervals, but the TCP is recovering quickly.  I can't tell whether RED is triggering (without ECN) or whether these are tail-drops on a fairly short queue.

Incidentally my download is coming across IPv6, so it may be triggering the related Linux bug.  This shouldn't totally disable the negotiation though, so more likely there's a broken router in the way.

 - Jonathan


* Re: [Bloat] Applying RED93 in south africa
  2011-05-21 19:11 ` Jonathan Morton
@ 2011-05-21 19:29   ` Dave Taht
  0 siblings, 0 replies; 13+ messages in thread
From: Dave Taht @ 2011-05-21 19:29 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1736 bytes --]

On Sat, May 21, 2011 at 1:11 PM, Jonathan Morton <chromatix99@gmail.com> wrote:

>
> On 21 May, 2011, at 5:27 pm, Dave Taht wrote:
>
> > Experience the pain of the Internet on another continent! (note that the
> gw is up on ipv6 as well)
>
> Well, it's not very fast in terms of throughput, but the latency seems to
> be about as good as I get normally.


It's simulating conditions in South Africa, with the link set to 840 down /
380 up.


>  But perhaps that's because I'm starting from Northern Europe and so I'm
> already used to intercontinental traffic due to the prevalence of US-based
> servers.
>
> I do see occasional brief stalls during the download, but these are
> substantially less intrusive than what I get on my 3G modem.  They suggest
> that packets are being dropped at reasonably regular intervals, but the TCP
> is recovering quickly.  I can't tell whether RED is triggering (without ECN)
> or whether these are tail-drops on a fairly short queue.
>
> Incidentally my download is coming across IPv6, so it may be triggering the
> related Linux bug.


Which one? On my side all known bugs are fixed. :)




> This shouldn't totally disable the negotiation though, so more likely
> there's a broken router in the way.
>
>
And it is highly likely you are interacting with a slightly different layer
of QoS if you are using IPv6.

Could you take a tcpdump capture and stick it somewhere? Or let me know
when you'll be running a test and I'll capture traces from here? I did see
the WAN light flicker madly a few minutes ago....

How is your latency under load, too?

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com

* Re: [Bloat] Applying RED93 in south africa
  2011-05-21 14:27 [Bloat] Applying RED93 in south africa Dave Taht
  2011-05-21 19:11 ` Jonathan Morton
@ 2011-05-28 20:02 ` Juliusz Chroboczek
  2011-05-31 15:02   ` Jim Gettys
  2011-05-28 20:07 ` Juliusz Chroboczek
  2 siblings, 1 reply; 13+ messages in thread
From: Juliusz Chroboczek @ 2011-05-28 20:02 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

> ECN is not being negotiated on TCP connections to SA(??),

That's unfortunately pretty common.  Many networks clear any priority
information in packets coming into their network; unfortunately, a lot of
routers clear the whole ToS byte -- not just the DSCP field, but the ECN
bits with it.
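
As a hypothetical Linux example (not from any particular deployment), the
netfilter DSCP target only touches the six DSCP bits, so a scrub rule like

  # zero the DSCP field on incoming packets without touching the ECN bits
  iptables -t mangle -A PREROUTING -j DSCP --set-dscp 0

leaves ECN intact; it's equipment that rewrites the full ToS byte that also
wipes the ECN bits.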

-- Juliusz

* Re: [Bloat] Applying RED93 in south africa
  2011-05-21 14:27 [Bloat] Applying RED93 in south africa Dave Taht
  2011-05-21 19:11 ` Jonathan Morton
  2011-05-28 20:02 ` Juliusz Chroboczek
@ 2011-05-28 20:07 ` Juliusz Chroboczek
  2011-05-28 20:16   ` Dave Taht
  2011-05-28 20:59   ` [Bloat] SFB tuning (was Re: Applying RED93 in south africa) Otto Solares Cabrera
  2 siblings, 2 replies; 13+ messages in thread
From: Juliusz Chroboczek @ 2011-05-28 20:07 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

> SFB is also in this release, but lacking good scripts for it...

SFB is supposed to be self-tuning, so it should be enough to say
something like:

  #!/bin/sh
  set -e

  if=${1:-eth0}

  tc -s qdisc del root dev $if 2>/dev/null || true
  tc -s qdisc add dev $if root handle 1: tbf ...
  tc -s qdisc add dev $if parent 1: handle 2: sfb

However, I may have made the SFB defaults a little bit too conservative
(leading to high stability but slow convergence), so you may want to
make it a little bit more aggressive by replacing the last line with:

  tc -s qdisc add dev $if parent 1: handle 2: sfb target 20 max 25 increment 0.005 decrement 0.0001
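
For completeness, a full sketch with a made-up tbf line filled in -- the
rate, burst and latency values are only placeholders for whatever your link
actually needs:

  #!/bin/sh
  set -e

  if=${1:-eth0}

  tc -s qdisc del root dev $if 2>/dev/null || true
  # placeholder shaper values -- size rate/burst/latency for your own link
  tc -s qdisc add dev $if root handle 1: tbf rate 800kbit burst 10kb latency 50ms
  tc -s qdisc add dev $if parent 1: handle 2: sfb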

-- Juliusz

* Re: [Bloat] Applying RED93 in south africa
  2011-05-28 20:07 ` Juliusz Chroboczek
@ 2011-05-28 20:16   ` Dave Taht
  2011-05-28 20:30     ` Juliusz Chroboczek
  2011-05-28 20:59   ` [Bloat] SFB tuning (was Re: Applying RED93 in south africa) Otto Solares Cabrera
  1 sibling, 1 reply; 13+ messages in thread
From: Dave Taht @ 2011-05-28 20:16 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1230 bytes --]

On Sat, May 28, 2011 at 2:07 PM, Juliusz Chroboczek <jch@pps.jussieu.fr> wrote:

> > SFB is also in this release, but lacking good scripts for it...
>
> SFB is supposed to be self-tuning, so it should be enough to say
> something like:
>
>  #!/bin/sh
>  set -e
>
>  if=${1:-eth0}
>
>  tc -s qdisc del root dev $if 2>/dev/null || true
>  tc -s qdisc add dev $if root handle 1: tbf ...
>  tc -s qdisc add dev $if parent 1: handle 2: sfb
>
> However, I may have made the SFB defaults a little bit too conservative
> (leading to high stability but slow convergence), so you may want to
> make it a little bit more aggressive by replacing the last line with:
>
>  tc -s qdisc add dev $if parent 1: handle 2: sfb target 20 max 25 increment 0.005 decrement 0.0001
>

Regrettably, the SFB patches to tc didn't make this release of 'bismark
capetown', just the SFB kernel backport to 2.6.37.6.

But as soon as I (or someone) can either get iproute 2.6.39 ported to
openwrt or backport those patches from net-next, I look forward very much
to trying SFB in the lab and in some real-world scenarios.






-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com

* Re: [Bloat] Applying RED93 in south africa
  2011-05-28 20:16   ` Dave Taht
@ 2011-05-28 20:30     ` Juliusz Chroboczek
  0 siblings, 0 replies; 13+ messages in thread
From: Juliusz Chroboczek @ 2011-05-28 20:30 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

> Regrettably, the SFB patches to tc didn't make this release of 'bismark
> capetown', just the SFB kernel backport to 2.6.37.6.

You probably already know that -- but you do *not* need to patch tc in
order to run SFB with the default parameters.  (You need to patch tc if
you want to run SFB with non-default parameters, or to monitor its
statistics.)

-- Juliusz

* [Bloat] SFB tuning (was Re:  Applying RED93 in south africa)
  2011-05-28 20:07 ` Juliusz Chroboczek
  2011-05-28 20:16   ` Dave Taht
@ 2011-05-28 20:59   ` Otto Solares Cabrera
  2011-05-29 15:29     ` [Bloat] SFB tuning Juliusz Chroboczek
  1 sibling, 1 reply; 13+ messages in thread
From: Otto Solares Cabrera @ 2011-05-28 20:59 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: bloat

On Sat, May 28, 2011 at 10:07:10PM +0200, Juliusz Chroboczek wrote:
> > SFB is also in this release, but lacking good scripts for it...
> 
> SFB is supposed to be self-tuning, so it should be enough to say
> something like:
> 
>   #!/bin/sh
>   set -e
> 
>   if=${1:-eth0}
> 
>   tc -s qdisc del root dev $if 2>/dev/null || true
>   tc -s qdisc add dev $if root handle 1: tbf ...
>   tc -s qdisc add dev $if parent 1: handle 2: sfb
> 
> However, I may have made the SFB defaults a little bit too conservative
> (leading to high stability but slow convergence), so you may want to
> make it a little bit more aggressive by replacing the last line with:
> 
>   tc -s qdisc add dev $if parent 1: handle 2: sfb target 20 max 25 increment 0.005 decrement 0.0001

Hello Juliusz,

I'm using SFB in a production environment, on the external interface to the
Internet (100Mbps ethernet capped to 70Mbps by the ISP):

tc qdisc add dev eth4 parent 1:3  handle 13:  sfb hash-type source limit 100 target 10 max 15 penalty_rate 60

And on the internal interfaces (1Gbps ethernet) to clients like this:

tc qdisc add dev ${DEV} parent 50:20 handle 52: sfb hash-type dest limit 100 target 10 max 15 penalty_rate 100

Everything is working stably, and I would like a recommendation: does this
look fine, or could something be tuned?

Thank you!
-
 Otto

* Re: [Bloat] SFB tuning
  2011-05-28 20:59   ` [Bloat] SFB tuning (was Re: Applying RED93 in south africa) Otto Solares Cabrera
@ 2011-05-29 15:29     ` Juliusz Chroboczek
  2011-05-30  0:52       ` Otto Solares Cabrera
  0 siblings, 1 reply; 13+ messages in thread
From: Juliusz Chroboczek @ 2011-05-29 15:29 UTC (permalink / raw)
  To: Otto Solares Cabrera; +Cc: bloat

> Internet (100Mbps ethernet capped to 70Mbps by the ISP):
>
> tc qdisc add dev eth4 parent 1:3  handle 13:  sfb hash-type source limit 100 target 10 max 15 penalty_rate 60
>
> And on the internal interfaces (1Gbps ethernet) to clients like this:
>
> tc qdisc add dev ${DEV} parent 50:20 handle 52: sfb hash-type dest limit 100 target 10 max 15 penalty_rate 100

Looks good to me.

You may want to increase limit and penalty -- remember that these values
are shared between all clients.  (Are you seeing any queuedrop?)

You may also want to experiment with increasing increment/decrement --
increment should be roughly 5 times larger than decrement, and the
values should be as large as you can make them without seeing
oscillations.  (Larger values yield faster convergence, but may cause
overshoot.)
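
Purely as an illustration of that 5:1 ratio (starting values only -- adjust
them up or down while watching for oscillation), your external line might
become something like:

  tc qdisc add dev eth4 parent 1:3 handle 13: sfb hash-type source limit 100 target 10 max 15 penalty_rate 60 increment 0.0025 decrement 0.0005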

You may also want to encourage your clients to enable ECN.

May I see the output of ``tc -s qdisc show''?

-- Juliusz

* Re: [Bloat] SFB tuning
  2011-05-29 15:29     ` [Bloat] SFB tuning Juliusz Chroboczek
@ 2011-05-30  0:52       ` Otto Solares Cabrera
  2011-05-30 22:05         ` Juliusz Chroboczek
  0 siblings, 1 reply; 13+ messages in thread
From: Otto Solares Cabrera @ 2011-05-30  0:52 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: bloat

On Sun, May 29, 2011 at 05:29:10PM +0200, Juliusz Chroboczek wrote:
> > Internet (100Mbps ethernet capped to 70Mbps by the ISP):
> >
> > tc qdisc add dev eth4 parent 1:3  handle 13:  sfb hash-type source limit 100 target 10 max 15 penalty_rate 60
> >
> > And on the internal interfaces (1Gbps ethernet) to clients like this:
> >
> > tc qdisc add dev ${DEV} parent 50:20 handle 52: sfb hash-type dest limit 100 target 10 max 15 penalty_rate 100
> 
> Looks good to me.

Thank you!

> You may want to increase limit and penalty -- remember that these values
> are shared between all clients.  (Are you seeing any queuedrop?)

OK, I will increase that. When I discovered bufferbloat on my interfaces
(all of them with pfifo_fast, txqueuelen 4000, plus the maximum tx ring),
I reduced the tx ring to its minimum and txqueuelen to 10, with very good
results. But now, with proper AQM in place, could I increase all the
buffers again for good?
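
For reference, the knobs involved are the driver tx ring and the interface
queue length -- the values below are just placeholders, not a recommendation:

  ethtool -G eth4 tx 64                 # driver tx ring; supported sizes depend on the NIC
  ip link set dev eth4 txqueuelen 10    # interface transmit queue length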

> You may also want to experiment with increasing increment/decrement --
> increment should be roughly 5 times larger than decrement, and the
> values should be as large as you can make them without seeing
> oscillations.  (Larger values yield faster convergence, but may cause
> overshoot.)

Ok, will do that.

> You may also want to encourage your clients to enable ECN.

Servers have it enabled, but I discovered that my routers' iptables rules
clear the DSCP field, as you accurately mention in another post. For the
clients it's very hard, as it's a very large network at my University.

> May I see the output of ``tc -s qdisc show''?

Sure -- just note that my current 'QoS' scheme involves some classes for
realtime traffic, others for shaping, and lastly SFB for fairness.

In advance, thank you for taking a look at all this data that my
brain can't cope with :)

eth4 (Internet 70Mbps):
qdisc prio 1: root refcnt 2 bands 3 priomap  2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 Sent 11135566140550 bytes 1231458943 pkt (dropped 7284219, overlimits 0 requeues 2683938) 
 rate 0bit 0pps backlog 0b 0p requeues 2683938 
qdisc sfb 13: parent 1:3 hash source limit 100 max 15 target 10
  increment 0.00050 decrement 0.00005 penalty rate 60 burst 20 (600s 60s 6x32)
 Sent 10466924374664 bytes 3212112668 pkt (dropped 5891604, overlimits 3098159 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
  earlydrop 2793445 penaltydrop 0 bucketdrop 3098159 queuedrop 0 marked 555
  maxqlen 0 maxprob 0.00000
qdisc sfq 131: parent 13: limit 127p quantum 1514b flows 127/1024 perturb 10sec 
 Sent 10466924365580 bytes 3212112662 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 

eth2 (wired LAN 1Gbps)
qdisc prio 1: root refcnt 2 bands 5 priomap  4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
 Sent 2483998367057 bytes 2866500587 pkt (dropped 6910353, overlimits 0 requeues 75576) 
 rate 0bit 0pps backlog 0b 0p requeues 75576 
qdisc htb 50: parent 1:5 r2q 25 default 30 direct_packets_stat 0 ver 3.17
 Sent 680549233437 bytes 496435880 pkt (dropped 6832111, overlimits 506216214 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfb 52: parent 50:20 hash dest limit 100 max 15 target 10
  increment 0.00050 decrement 0.00005 penalty rate 100 burst 20 (600s 60s 6x32)
 Sent 476083684999 bytes 335250357 pkt (dropped 5167392, overlimits 1190208 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
  earlydrop 3977184 penaltydrop 0 bucketdrop 1190208 queuedrop 0 marked 416
  maxqlen 0 maxprob 0.00000
qdisc sfq 521: parent 52: limit 127p quantum 9014b flows 127/1024 perturb 10sec 
 Sent 476083684999 bytes 335250357 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfb 53: parent 50:30 hash dest limit 100 max 15 target 10
  increment 0.00050 decrement 0.00005 penalty rate 100 burst 20 (600s 60s 6x32)
 Sent 165013253442 bytes 125931567 pkt (dropped 1615538, overlimits 229717 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
  earlydrop 1385821 penaltydrop 0 bucketdrop 229717 queuedrop 0 marked 0
  maxqlen 0 maxprob 0.00000
qdisc sfq 531: parent 53: limit 127p quantum 9014b flows 127/1024 perturb 10sec 
 Sent 165013253182 bytes 125931564 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 

eth6 (wireless 802.11bgn)
qdisc prio 1: root refcnt 2 bands 5 priomap  4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
 Sent 6677917818544 bytes 1754047115 pkt (dropped 63616539, overlimits 0 requeues 2993) 
 rate 0bit 0pps backlog 0b 0p requeues 2993 
qdisc htb 50: parent 1:5 r2q 25 default 30 direct_packets_stat 0 ver 3.17
 Sent 5139577338863 bytes 3714353622 pkt (dropped 63614459, overlimits 967897712 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfb 52: parent 50:20 hash dest limit 100 max 15 target 10
  increment 0.00050 decrement 0.00005 penalty rate 100 burst 20 (600s 60s 6x32)
 Sent 3036033504418 bytes 2069667963 pkt (dropped 54627811, overlimits 11816962 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
  earlydrop 42810849 penaltydrop 0 bucketdrop 11795286 queuedrop 0 marked 4918
  maxqlen 0 maxprob 0.00000
qdisc sfq 521: parent 52: limit 127p quantum 1514b flows 127/1024 perturb 10sec 
 Sent 3036033504418 bytes 2069667963 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
qdisc sfb 53: parent 50:30 hash dest limit 100 max 15 target 10
  increment 0.00050 decrement 0.00005 penalty rate 100 burst 20 (600s 60s 6x32)
 Sent 973169683987 bytes 746581178 pkt (dropped 5858989, overlimits 1292169 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 
  earlydrop 4566820 penaltydrop 0 bucketdrop 1287799 queuedrop 0 marked 18884
  maxqlen 0 maxprob 0.00000
qdisc sfq 531: parent 53: limit 127p quantum 1514b flows 127/1024 perturb 10sec 
 Sent 973169683987 bytes 746581178 pkt (dropped 0, overlimits 0 requeues 0) 
 rate 0bit 0pps backlog 0b 0p requeues 0 

Thank you!
-
 Otto

* Re: [Bloat] SFB tuning
  2011-05-30  0:52       ` Otto Solares Cabrera
@ 2011-05-30 22:05         ` Juliusz Chroboczek
  2011-05-30 23:37           ` Otto Solares Cabrera
  0 siblings, 1 reply; 13+ messages in thread
From: Juliusz Chroboczek @ 2011-05-30 22:05 UTC (permalink / raw)
  To: Otto Solares Cabrera; +Cc: bloat

Thanks a lot for the data.

>> (Are you seeing any queuedrop?)

Okay, both queuedrop and penaltydrop are 0 on all interfaces, which
means that you don't overflow any buffers and that the penalty box is
not being used -- tweaking limit and the penalty rate will have no effect.

I don't understand what you're doing on eth6, which has both prio and htb.

You're systematically putting sfq below sfb.  You should be aware that
since sfb keeps the queues short, the effect of sfq is reduced somewhat
-- you may not be getting all the fairness you're expecting.

Your packet loss rates are

eth4 (Internet): 0.6%
eth2 (LAN): 0.2%
eth6 (Wifi): 3.6%
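
(These follow from the root prio counters above -- for instance, eth6
dropped 63,616,539 packets against roughly 1.75 billion sent, which is
about 3.6%.)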

Only eth6 is congested.  Three quarters of the eth6 drops are in sfb
52:.  There's 3.6 times more earlydrop than bucketdrop, which seems okay
to me.  Increasing increment/decrement might reduce the bucketdrop
somewhat; so would increasing the target, at the cost of increasing the
amount of queueing.

Thanks again for the data,

-- Juliusz

P.S.  Wow !  Guatemala !

* Re: [Bloat] SFB tuning
  2011-05-30 22:05         ` Juliusz Chroboczek
@ 2011-05-30 23:37           ` Otto Solares Cabrera
  0 siblings, 0 replies; 13+ messages in thread
From: Otto Solares Cabrera @ 2011-05-30 23:37 UTC (permalink / raw)
  To: Juliusz Chroboczek; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1819 bytes --]

On Tue, May 31, 2011 at 12:05:51AM +0200, Juliusz Chroboczek wrote:
> I don't understand what you're doing on eth6, which has both prio and htb.
> 
> You're systematically putting sfq below sfb.  You should be aware that
> since sfb keeps the queues short, the effect of sfq is reduced somewhat
> -- you may not be getting all the fairness you're expecting.

Basically, my University's main router runs Linux with 9 GigE NICs to
different networks (we can't afford expensive routers :) ).

My idea was for the Internet interface to use 2 queues (or bands, in prio
parlance): one for realtime or very important traffic, and the other for
the rest of the traffic, running SFB (this is upload to the Internet).

For every client network I used 5 queues, the last one being the default,
shaped and "fairnessed" (download from the Internet).

The WiFi network is a client network connected to GigE switches which
in turn connect to 125 Linux APs (WRT160NL with OpenWRT) in the entire
campus.

I've attached my "QoS" scripts so you can form an idea of why some things
are done the way they are, but I know it's too much to ask you to look
through it all; my AQM/QoS setup is a little elaborate.

("unhandled" bands in the prio qdisc are plain pfifo with qlen 10,
ip_qos is the main script which calls ip_qos_lan for every client net).

> Your packet loss rates are
> 
> eth4 (Internet): 0.6%
> eth2 (LAN): 0.2%
> eth6 (Wifi): 3.6%
> 
> Only eth6 is congested.  Three quarters of the eth6 drops are in sfb
> 52:.  There's 3.6 times more earlydrop than bucketdrop, which seems okay
> to me.  Increasing increment/decrement might reduce the bucketdrop
> somewhat; so would increasing the target, at the cost of increasing the
> amount of queueing.
> 
> Thanks again for the data,

Thank you for your time and analysis!

> P.S.  Wow !  Guatemala !

You're welcome!
-
 Otto

[-- Attachment #2: ip_qos --]
[-- Type: text/plain, Size: 4639 bytes --]

#!/bin/sh
#
# ip_qos
#
# UG QoS implementation
#
# Copyright (C)2009-2011, Universidad Galileo
# Otto Solares <solca@galileo.edu>


###############
# definitions #
###############
# Import definitions
. /etc/network/ip_defs


/etc/network/ip_qos_lan eth0 stop
/etc/network/ip_qos_lan eth1 stop
/etc/network/ip_qos_lan eth2 stop
/etc/network/ip_qos_lan eth3 stop
/etc/network/ip_qos_lan eth4 stop
/etc/network/ip_qos_lan eth5 stop
/etc/network/ip_qos_lan eth6 stop
/etc/network/ip_qos_lan eth7 stop
/etc/network/ip_qos_lan eth8 stop


if [ "$1" = "stop" ]; then
        exit
fi


#######
# QoS #
#######

# LAN networks

${TC} qdisc add dev eth0 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
# simple small packets (<128)
${TC} filter add dev eth0 parent 1: protocol ip prio 1 u32 \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xff80 at 2 \
   flowid 1:1
# services
${TC} filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip dst 10.0.0.13  match ip dport 24   0xffff flowid 1:1
${TC} filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip dst 10.0.0.6   flowid 1:1

/etc/network/ip_qos_lan eth1 30000
/etc/network/ip_qos_lan eth2 30000
/etc/network/ip_qos_lan eth3 30000
/etc/network/ip_qos_lan eth6 30000


# WAN networks

# claro main
${TC} qdisc add dev eth4 root        handle 1:   prio bands 3 priomap 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
#${TC} qdisc add dev eth4 parent 1:2 handle 20:  red limit 5000000 min 208333 max 625000 avpkt 1000 burst 347 probability 0.02 bandwidth 25000 ecn
${TC} qdisc add dev eth4 parent 1:3  handle 13:  sfb hash-type source limit 100 target 10 max 15 penalty_rate 60
${TC} qdisc add dev eth4 parent 13:  handle 131: sfq perturb 10

# simple small packets (<128)
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xff80 at 2 \
   flowid 1:1
# services
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip sport 22  0xffff flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip dport 22  0xffff flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip src 200.9.255.13  flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 200.9.255.13  match ip sport   24 0xffff flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip src 200.9.255.6   flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip src 200.9.255.69  flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 200.9.255.69  match ip sport  554 0xffff flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 200.9.255.69  match ip sport 1755 0xffff flowid 1:1
${TC} filter add dev eth4 parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 200.9.255.152 match ip sport 3389 0xffff flowid 1:1
# google-claro cache netblock
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 200.6.228.0/24   flowid 1:2
# google netblock
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 216.239.32.0/19  flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 64.233.160.0/19  flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 66.249.80.0/20   flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 72.14.192.0/18   flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 209.85.128.0/17  flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 66.102.0.0/20    flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 74.125.0.0/16    flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 64.18.0.0/20     flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 207.126.144.0/20 flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 173.194.0.0/16   flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 216.73.93.70/31  flowid 1:2
${TC} filter add dev eth4 parent 1: protocol ip prio 2 u32 match ip dst 216.73.93.72/31  flowid 1:2

# other physical links
${TC} qdisc add dev eth5 root pfifo
${TC} qdisc add dev eth7 root pfifo
${TC} qdisc add dev eth8 root pfifo


exit 0

[-- Attachment #3: ip_qos_lan --]
[-- Type: text/plain, Size: 7335 bytes --]

#!/bin/sh
#
# ip_qos_lan
#
# UG QoS implementation
# slightly based on WonderShaper
#
# Copyright (C)2009-2011, Universidad Galileo
# Otto Solares <solca@galileo.edu>
#
# Egress queues:
# 1. real-time priorities
# 2. internal LAN to LAN
# 3. external WAN (Internet) to LAN (unshaped)
# 4. google netblocks (unshaped)
# 5. external WAN (Internet) to LAN (shaped)

###############
# definitions #
###############
# Import definitions
. /etc/network/ip_defs


DEV=$1
# bandwidth for queue 5 shaping
BANDWIDTH=$2


if [ -z "$2" ]; then
	exit 1
fi

if [ "$2" = "status" ]; then
	${TC} -s qdisc ls dev $DEV
	echo
	${TC} -s class ls dev $DEV
	exit
fi


${TC} qdisc del dev $DEV root    >/dev/null 2>&1
${TC} qdisc del dev $DEV ingress >/dev/null 2>&1


if [ "$2" = "stop" ]; then
	exit
fi


### egress qdiscs ###

# root egress qdisc
${TC} qdisc add dev ${DEV} root handle 1: prio bands 5 priomap 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4

# bands:
# 0 real-time
# 1 internal LAN to LAN
# 2 external WAN (Internet) to LAN unshaped
# 3 google netblock
# 4 external WAN (Internet) to LAN shaped

# external WAN (Internet) to LAN shaped
${TC} qdisc add dev ${DEV} parent 1:5	handle 50: htb default 30 r2q 25
${TC} class add dev ${DEV} parent 50:	classid 50:1  htb rate ${BANDWIDTH}kbit         ceil ${BANDWIDTH}kbit burst 2k
${TC} class add dev ${DEV} parent 50:1	classid 50:10 htb rate ${BANDWIDTH}kbit         ceil ${BANDWIDTH}kbit burst 2k prio 1
${TC} class add dev ${DEV} parent 50:1	classid 50:20 htb rate $[9*${BANDWIDTH}/10]kbit ceil ${BANDWIDTH}kbit burst 2k prio 2
${TC} class add dev ${DEV} parent 50:1	classid 50:30 htb rate $[5*${BANDWIDTH}/10]kbit ceil ${BANDWIDTH}kbit burst 2k prio 3
${TC} qdisc add dev ${DEV} parent 50:20	handle 52: sfb hash-type dest limit 100 target 10 max 15 penalty_rate 100
${TC} qdisc add dev ${DEV} parent 52:	handle 521: sfq perturb 10
${TC} qdisc add dev ${DEV} parent 50:30	handle 53: sfb hash-type dest limit 100 target 10 max 15 penalty_rate 100
${TC} qdisc add dev ${DEV} parent 53:	handle 531: sfq perturb 10


### classify filters ###

# real-time
# ICMP & TCP ACKs & small (<512) UDP/UDPlite
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xff80 at 2 \
   flowid 1:1
# voip.galileo.edu
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip src 10.0.0.6 flowid 1:1
# medialab.galileo.edu
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip src 192.168.15.10 flowid 1:1
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 192.168.15.10 match ip sport  554 0xffff flowid 1:1
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 192.168.15.10 match ip sport 1755 0xffff flowid 1:1
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 192.168.15.10 match ip dport 7007 0xffff flowid 1:1
# home.galileo.edu
${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip src 192.168.0.4 match ip sport 22 0xffff flowid 1:1
if [ "${DEV}" == "eth3" ]; then
 # medialab.galileo.edu
 ${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol 17 0xff match ip dst 192.168.15.10 flowid 1:1
 ${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip dst 192.168.15.10 match ip dport  554 0xffff flowid 1:1
 ${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip dst 192.168.15.10 match ip dport 1755 0xffff flowid 1:1
 ${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip dst 192.168.15.10 match ip sport 7007 0xffff flowid 1:1
 # home.galileo.edu
 ${TC} filter add dev ${DEV} parent 1: protocol ip prio 1 u32 match ip protocol  6 0xff match ip dst 192.168.0.4 match ip dport 22 0xffff flowid 1:1
fi

# internal LAN to LAN
${TC} filter add dev ${DEV} parent 1: protocol ip prio 2 u32 match ip src 10.0.0.0/8     flowid 1:2
${TC} filter add dev ${DEV} parent 1: protocol ip prio 2 u32 match ip src 172.16.0.0/12  flowid 1:2
${TC} filter add dev ${DEV} parent 1: protocol ip prio 2 u32 match ip src 192.168.0.0/16 flowid 1:2
${TC} filter add dev ${DEV} parent 1: protocol ip prio 2 u32 match ip src 224.0.0.0/4    flowid 1:2

# external WAN (Internet) to LAN unshaped
# some WAN links must not be shaped
${TC} filter add dev ${DEV} parent 1: protocol ip prio 3 u32 match ip src 0.0.0.0/0 indev eth5 flowid 1:3
${TC} filter add dev ${DEV} parent 1: protocol ip prio 3 u32 match ip src 0.0.0.0/0 indev eth8 flowid 1:3
# netbooks
#if [ "${DEV}" == "eth6" ]; then
# ${TC} filter add dev ${DEV} parent 1: protocol ip prio 3 u32 match ip protocol 17 0xff match ip dst 10.1.1.120 flowid 1:3
# ${TC} filter add dev ${DEV} parent 1: protocol ip prio 3 u32 match ip protocol  6 0xff match ip dst 10.1.1.120 flowid 1:3
#fi

# google-claro cache netblock must not be shaped
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 200.6.228.0/24   flowid 1:4
# google netblocks must not be shaped
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 216.239.32.0/19  flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 64.233.160.0/19  flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 66.249.80.0/20   flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 72.14.192.0/18   flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 209.85.128.0/17  flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 66.102.0.0/20    flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 74.125.0.0/16    flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 64.18.0.0/20     flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 207.126.144.0/20 flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 173.194.0.0/16   flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 216.73.93.70/31  flowid 1:4
${TC} filter add dev ${DEV} parent 1: protocol ip prio 4 u32 match ip src 216.73.93.72/31  flowid 1:4

# everything else must be shaped
${TC} filter add dev ${DEV} parent 1: protocol ip prio 5 u32 match ip src 0.0.0.0/0 indev eth4 flowid 1:5

# shaped priorities
# IP ToS Minimize-Delay & UDP & UDPlite
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip tos      0x10 0xff flowid 50:10
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip protocol   17 0xff flowid 50:10
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip protocol  136 0xff flowid 50:10
# HTTP
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip sport 80  0xffff flowid 50:20
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip dport 80  0xffff flowid 50:20
# HTTPS
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip sport 443 0xffff flowid 50:20
${TC} filter add dev ${DEV} parent 50: protocol ip prio 1 u32 match ip dport 443 0xffff flowid 50:20


* Re: [Bloat] Applying RED93 in south africa
  2011-05-28 20:02 ` Juliusz Chroboczek
@ 2011-05-31 15:02   ` Jim Gettys
  0 siblings, 0 replies; 13+ messages in thread
From: Jim Gettys @ 2011-05-31 15:02 UTC (permalink / raw)
  To: bloat

On 05/28/2011 04:02 PM, Juliusz Chroboczek wrote:
>> ECN is not being negotiated on TCP connections to SA(??),
> That's unfortunately pretty common.  Many networks clear any priority
> information in packets coming into their network; unfortunately, a lot of
> routers clear the whole ToS byte -- not just the DSCP field, but the ECN
> bits with it.
>
There is some data on this in Bauer and Beverly's CAIDA workshop results:

http://gettys.wordpress.com/2011/02/22/caida-workshop-aims-2011-bauer-and-beverly-ecn-results/

It looks like *most* networks don't clobber the bits, though some do.
So far, the networks that clobber the bits have been helpful about fixing them.
                     - Jim

