General list for discussing Bufferbloat
* Re: [Bloat] troubles with congestion (tbf vs htb)
       [not found] <20120309215037.GI2539@paperino>
@ 2012-03-09 22:08 ` Dave Taht
  2012-03-10 10:30   ` Davide Gerhard
  0 siblings, 1 reply; 2+ messages in thread
From: Dave Taht @ 2012-03-09 22:08 UTC (permalink / raw)
  To: Davide Gerhard, bloat

Dear Davide:

This sounds like a job for the bloat list. I note that your attachment got
filtered out of your posting to netdev.

None of what you describe surprises me, but I would love to duplicate
your tests, exactly, against the new 3.3 kernel, which has BQL and
various active AQMs, and I also remember various things
around ssthresh being fiddled with over the past year.

On Fri, Mar 9, 2012 at 1:50 PM, Davide Gerhard <rainbow@irh.it> wrote:
> Hi,
> I am a master's student at the University of Trento. Together with one
> colleague, I am doing a project for the Advanced Networking course, focused
> on TCP congestion control. I used tc with htb to simulate a 10 Mbit/s link
> on a real 100 Mbit/s Ethernet LAN. Here are the commands I used:
>
> tc qdisc add dev $INTF root handle 1: netem $DELAY $LOSS $DUPLICATE
>  $CORRUPT $REORDERING
> tc qdisc add dev $INTF parent 1:1 handle 10: htb default 1 r2q 10
> tc class add dev $INTF parent 10: classid 10:1 htb rate ${BANDW}kbit ceil
>  ${BANDW}kbit
>
> and here is the topology
>
> client -->|    |--> server with iperf -s
>          |    |
>          |    |
>          +    +
>           eth0
>    CONGESTION machine
>
> The congestion machine has the following configuration:
> - kernel 3.0
> - echo 1 > /proc/sys/net/ipv4/ip_forward
> - echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
> - echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
> - echo 1 > /proc/sys/net/ipv4/ip_no_pmtu_disc
> - echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
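
[For reference, the shaping setup described above can be collected into a
single sketch. The interface, rate, and delay values here are placeholders,
and the class is attached as 10:1 so that htb's "default 1" actually resolves
to it:]

```shell
#!/bin/sh
# Consolidated sketch of the setup above (assumed values; needs root).
INTF=eth0              # placeholder interface
BANDW=10000            # 10 Mbit/s, expressed in kbit as above
DELAY="delay 50ms"     # hypothetical netem delay

# netem at the root emulates link impairments; htb below it enforces
# the rate, with all traffic falling into class 10:1 via "default 1".
tc qdisc add dev "$INTF" root handle 1: netem $DELAY
tc qdisc add dev "$INTF" parent 1:1 handle 10: htb default 1 r2q 10
tc class add dev "$INTF" parent 10: classid 10:1 htb \
    rate "${BANDW}kbit" ceil "${BANDW}kbit"
```

[One caveat: stacking netem above a shaper on the same box can distort
results, since netem's delay queue sits in front of the rate limiter; many
test setups keep emulation and shaping on separate interfaces or machines.]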
>
> The client captures the window size and ssthresh with tcp_flow_spy, but we
> do not see any changes in ssthresh, and the window size is too large compared
> to the bandwidth*latency product (see attachment). In a normal scenario this
> would be acceptable (I guess), but in order to obtain relevant results for
> our work, we need to avoid this "buffer" and to trigger changes in ssthresh.
> I have already tried changing the backlog, but this does not change anything.
> I have also tried tbf with the following command:
>
> tc qdisc add dev $INTF parent 1:1 handle 10: tbf rate ${BANDW}kbit burst 10kb
>  latency 1.2ms minburst 1540
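
[As a rough numeric check on the tbf parameters above (a sketch; HZ=250 and a
50 ms delay are my assumptions, not values from this setup): classic
timer-driven tbf releases tokens once per tick, so the burst must cover at
least rate/HZ bytes, and a single TCP flow should settle near the
bandwidth*delay product:]

```shell
# Sanity-check tbf burst and bandwidth*delay product (assumed values).
RATE_BPS=10000000      # 10 Mbit/s link rate
HZ=250                 # hypothetical kernel timer frequency
DELAY_MS=50            # hypothetical netem delay

# Classic tbf drains once per timer tick, so the bucket must hold at
# least one tick's worth of bytes:
MIN_BURST=$(( RATE_BPS / 8 / HZ ))

# Steady-state in-flight data for one TCP flow ~ bandwidth*delay product:
BDP=$(( RATE_BPS / 8 * DELAY_MS / 1000 ))

echo "minimum tbf burst: $MIN_BURST bytes"   # 5000: "burst 10kb" clears it
echo "bandwidth*delay:   $BDP bytes"         # 62500
```

[If the captured window sits well above the bandwidth*delay figure, the
excess is sitting in the shaper's queue rather than usefully in flight, which
matches the behaviour described above.]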
>
> In this case, congestion control works as we expect, but if we use netem
> I have to recalculate all the needed values again (correct?). Are there
> any other solutions?
>
> Best regards.
> /davide
>
> P.S. Here follow the sysctl parameters used on the client:
> net.ipv4.tcp_no_metrics_save=1
> net.ipv4.tcp_sack=1
> net.ipv4.tcp_dsack=1
>
> --
> "The abdomen, the chest, and the brain will forever be shut from the intrusion
> of the wise and humane surgeon." - Sir John Eric Ericksen, British surgeon,
> appointed Surgeon-Extraordinary to Queen Victoria 1873
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net


* Re: [Bloat] troubles with congestion (tbf vs htb)
  2012-03-09 22:08 ` [Bloat] troubles with congestion (tbf vs htb) Dave Taht
@ 2012-03-10 10:30   ` Davide Gerhard
  0 siblings, 0 replies; 2+ messages in thread
From: Davide Gerhard @ 2012-03-10 10:30 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

=- Dave Taht wrote on Fri  9.Mar'12 at 14:08:55 -0800 -=

> Dear Davide:

Thank you for the answer.

> 
> sounds like a job for the bloat list. I note your attachment got
> filtered out on your
> posting to netdev.

Yes. In any case, the description is good enough to understand the problem (I
hope).

> 
> None of what you describe surprises me, but I would love to duplicate
> your tests, exactly, against the new 3.3 kernel, which has BQL and
> various active AQMs, and I also remember various things
> around ssthresh being fiddled with over the past year.

Like you suggested, I will try with 3.3.0-rc6 (next week).
In the meantime I want to look into this more deeply. I have read some papers
from this[0] page and from others[1][2], but I did not understand how I can use
these changes. That is, are all the changes built into the kernel, so that I
can keep the same tc rules, or do I need to change something? (CONFIG_BQL, ...)
Moreover, do you have any suggestions about the test environment and the
commands that I used?
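
[On the CONFIG_BQL part of the question, for whatever it is worth: BQL is not
configured through tc at all. On a kernel and driver that support it, each TX
queue exposes byte_queue_limits files in sysfs, which can be inspected like
this (eth0 is a placeholder; the loop is a no-op on kernels without BQL):]

```shell
# Show the BQL state of every TX queue on eth0 (read-only inspection).
for q in /sys/class/net/eth0/queues/tx-*/byte_queue_limits; do
    [ -d "$q" ] || continue                    # skip if BQL is absent
    echo "$q"
    echo "  limit:     $(cat "$q/limit")"      # current byte limit
    echo "  limit_max: $(cat "$q/limit_max")"  # upper clamp (writable)
    echo "  inflight:  $(cat "$q/inflight")"   # bytes queued to the NIC
done
```

[Writing a small value into limit_max pins the driver queue short, which is
the BQL-level analogue of shrinking txqueuelen; the tc rules themselves stay
the same.]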

thanks
/davide

[0] http://www.phoronix.com/scan.php?page=news_item&px=MTAzODg
[1] http://lwn.net/Articles/469652/
[2] http://lwn.net/Articles/454378/



-- 
"Man will never reach the moon regardless of all future scientific advances." - 
Dr. Lee DeForest, "Father of Radio & Grandfather of Television"

