* [Cerowrt-devel] Proper AQM settings for my connection?
From: Hector Ordorica @ 2013-12-21 4:32 UTC (permalink / raw)
To: cerowrt-devel
I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
settings from the latest draft:
http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
I have a 30Mb down / 5Mb upload cable connection.
With fq_codel, even undershooting network upload bandwidth by more
than 95%, I'm seeing 500ms excessive upload buffering warnings from
netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
and the same was true.
With pie (and default settings), the buffer warnings go away:
http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
And the connection performs very well while torrenting and gaming.
Should I try new code? Or can I tweak some variables and/or delay
options in scripts for codel?
Thanks for your work,
Hector
* Re: [Cerowrt-devel] Proper AQM settings for my connection?
From: Rich Brown @ 2013-12-21 5:01 UTC (permalink / raw)
To: Hector Ordorica; +Cc: cerowrt-devel
On Dec 20, 2013, at 11:32 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
> I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
> settings from the latest draft:
>
> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>
> I have a 30Mb down / 5Mb upload cable connection.
>
> With fq_codel, even undershooting network upload bandwidth by more
> than 95%, I'm seeing 500ms excessive upload buffering warnings from
> netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
> and the same was true.
I have seen the same thing, although with different CeroWrt firmware. Netalyzr was reporting >500 msec buffering in both directions.
However, I was simultaneously running a ping to Google during that Netalyzr run. The
ping times started at ~55 msec before I started Netalyzr, and occasionally they would bump
up to 70 or 80 msec, but they never reached the long times that Netalyzr reported...
I also reported this to the Netalyzr mailing list and they didn’t seem surprised. I’m not sure how to interpret this.
> With pie (and default settings), the buffer warnings go away:
>
> http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>
> And the connection performs very well while torrenting and gaming.
>
> Should I try new code? Or can I tweak some variables and/or delay
> options in scripts for codel?
A couple thoughts:
- There have been a bunch of changes between 3.10.13-2 and the current version (3.10.24-5, which seems pretty stable). You might try upgrading. (See the “Rough Notes” at the bottom of http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_310_Release_Notes for the progression of changes).
- Have you tried a more aggressive decrease to the link speeds on the AQM page (say, 85% instead of 95%)?
- Can we get more corroboration from the list about the behavior of Netalyzr?
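(To put that 85% suggestion into concrete numbers for a 30/5 connection, assuming the AQM page takes rates in kbit/s, this works out to roughly:

    30000 * 0.85 = 25500 kbit/s down
     5000 * 0.85 =  4250 kbit/s up

though the percentage that actually works best varies from link to link, so treat those as starting points.)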
Rich
* Re: [Cerowrt-devel] Proper AQM settings for my connection?
From: Dave Taht @ 2013-12-21 5:16 UTC (permalink / raw)
To: Rich Brown; +Cc: Hector Ordorica, cerowrt-devel
Netalyzr is inaccurate. It pushes out a UDP stream that doesn't run
long enough for codel to react, thus giving you an over-estimate, and
furthermore it doesn't detect the presence of flow queuing on the link
by sending a secondary flow. This latter problem in Netalyzr is
starting to bug me. They've known for a long time now that they don't
detect SFQ, SQF, fq_codel, or DRR; these packet schedulers are deployed
at the very least at FT and free.fr, and probably at quite a few more
places, and detecting them is straightforward.
Netalyzr + a ping on the side is all that is needed to see the
difference between bloat, AQM, and packet scheduling.
The rrul test is even better.
I would be interested in your pie results on the link...
Netalyzr + a ping -c 60 somewhere, in both cases...
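Roughly like this, as a sketch, once with fq_codel and once with pie configured (the hosts are only placeholders, and the netperf-wrapper flags are from memory, so double-check them against your version):

ping -c 60 8.8.8.8        # latency reference, running in one terminal

# meanwhile, in another terminal/browser: the Netalyzr run, or better
# yet an rrul run against any reachable netperf server
netperf-wrapper -H netperf.example.org -l 60 rrul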
However... there WAS a lot of churn in the AQM code these past few
months, so it is possible you have a busted version of the AQM scripts
as well. A sample of your
tc -s qdisc show dev ge00
would be helpful. As Rich says, 3.10.24-5 is pretty good at this
point, and a large number of people have installed it, with only a few
problems. (We have a kernel issue that reared its ugly head again
(instruction traps), and we are discussing improving the web interface
further.)
So upgrade first.
On Fri, Dec 20, 2013 at 9:01 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>
> On Dec 20, 2013, at 11:32 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
>
>> I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
>> settings from the latest draft:
>>
>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>
>> I have a 30Mb down / 5Mb upload cable connection.
>>
>> With fq_codel, even undershooting network upload bandwidth by more
>> than 95%, I'm seeing 500ms excessive upload buffering warnings from
>> netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
>> and the same was true.
>
> I have seen the same thing, although with different CeroWrt firmware. Netalyzr was reporting
>> 500 msec buffering in both directions.
>
> However, I was simultaneously running a ping to Google during that Netalyzr run, and the
> ping times started at ~55 msec before I started Netalyzr, and occasionally they would bump
> up to 70 or 80 msec, but never the long times that Netzlyzr reported...
>
> I also reported this to the Netalyzr mailing list and they didn’t seem surprised. I’m not sure how to interpret this.
>
>> With pie (and default settings), the buffer warnings go away:
>>
>> http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>>
>> And the connection performs very well while torrenting and gaming.
>>
>> Should I try new code? Or can I tweak some variables and/or delay
>> options in scripts for codel?
>
> A couple thoughts:
>
> - There have been a bunch of changes between 3.10.13-2 and the current version (3.10.24-5, which seems pretty stable). You might try upgrading. (See the “Rough Notes” at the bottom of http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_310_Release_Notes for the progression of changes).
>
> - Have you tried a more aggressive decrease to the link speeds on the AQM page (say, 85% instead of 95%)?
>
> - Can we get more corroboration from the list about the behavior of Netalyzer?
>
> Rich
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Cerowrt-devel] Proper AQM settings for my connection?
From: Hector Ordorica @ 2013-12-21 5:38 UTC (permalink / raw)
To: cerowrt-devel
Interesting, I'll upgrade as soon as I have the chance to reconfigure it.
I pinged the same Netalyzr server while the tests ran. The replies
started to drop during the downlink and uplink tests, except with
fq_codel, which remained relatively stable.
No AQM:
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
Reply from 54.234.36.13: bytes=32 time=107ms TTL=37
Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Reply from 54.234.36.13: bytes=32 time=129ms TTL=37
Request timed out.
Reply from 54.234.36.13: bytes=32 time=105ms TTL=37
Reply from 54.234.36.13: bytes=32 time=152ms TTL=37
Reply from 54.234.36.13: bytes=32 time=139ms TTL=37
Request timed out.
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
Reply from 54.234.36.13: bytes=32 time=156ms TTL=37
Reply from 54.234.36.13: bytes=32 time=153ms TTL=37
Reply from 54.234.36.13: bytes=32 time=118ms TTL=37
Reply from 54.234.36.13: bytes=32 time=166ms TTL=37
Reply from 54.234.36.13: bytes=32 time=176ms TTL=37
Reply from 54.234.36.13: bytes=32 time=160ms TTL=37
Reply from 54.234.36.13: bytes=32 time=138ms TTL=37
Reply from 54.234.36.13: bytes=32 time=150ms TTL=37
Reply from 54.234.36.13: bytes=32 time=182ms TTL=37
pie:
Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Reply from 54.234.36.13: bytes=32 time=463ms TTL=37
Request timed out.
Reply from 54.234.36.13: bytes=32 time=128ms TTL=37
Request timed out.
Reply from 54.234.36.13: bytes=32 time=108ms TTL=37
Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
Reply from 54.234.36.13: bytes=32 time=100ms TTL=37
Reply from 54.234.36.13: bytes=32 time=144ms TTL=37
Reply from 54.234.36.13: bytes=32 time=174ms TTL=37
Request timed out.
Reply from 54.234.36.13: bytes=32 time=128ms TTL=37
Reply from 54.234.36.13: bytes=32 time=123ms TTL=37
Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
fq_codel:
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
Reply from 54.234.36.13: bytes=32 time=90ms TTL=37
Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
Reply from 54.234.36.13: bytes=32 time=124ms TTL=37
Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
Reply from 54.234.36.13: bytes=32 time=117ms TTL=37
Reply from 54.234.36.13: bytes=32 time=99ms TTL=37
Reply from 54.234.36.13: bytes=32 time=100ms TTL=37
Reply from 54.234.36.13: bytes=32 time=99ms TTL=37
root@cerowrt:~# tc -s qdisc show dev ge00
qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0
Sent 10342827 bytes 50261 pkt (dropped 2158, overlimits 18307 requeues 0)
backlog 0b 0p requeues 0
qdisc fq_codel 110: parent 1:10 limit 600p flows 1024 quantum 300
target 5.0ms interval 100.0ms
Sent 10342827 bytes 50261 pkt (dropped 4928, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 1514 drop_overlimit 2165 new_flow_count 1568 ecn_mark 0
new_flows_len 0 old_flows_len 1
qdisc ingress ffff: parent ffff:fff1 ----------------
Sent 44097076 bytes 59361 pkt (dropped 81, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Thanks, I'll also look into rrul.
On Fri, Dec 20, 2013 at 9:16 PM, Dave Taht <dave.taht@gmail.com> wrote:
> Netanalyzr is inaccurate. It pushes out a udp stream for not long
> enough fpr codel to react, thus giving you an over-estimate, and
> furthermore doesn't detect the presence of flow queuing on the link by
> sending a secondary flow. This latter problem in netanalyzer is
> starting to bug me. They've known they don't detect SFQ, SQF, or
> fq_codel or drr for a long time now, these packet schedulers are
> deployed at the very least at FT and free.fr and probably quite a few
> places more, and detecting it is straightforward.
>
> Netanalyzr + a ping on the side is all that is needed to see
> difference between bloat, aqm, and packet scheduling.
>
> The rrul test is even better.
>
> I would be interested in your pie results on the link...
>
> netanalyzer + a ping -c 60 somewhere in both cases...
>
> however... there WAS a lot of churn in the AQM code these past few
> months, so it is possible you have a busted version of the aqm scripts
> as well. a sample of your
>
> tc -s qdisc show dev ge00
>
> would be helpful. As rich says, 3.10.24-5 is pretty good at this
> point, and a large number of people have installed it, with only a few
> problems (We have a kernel issue that rose it's ugly head again
> (instruction traps), and we are discussing improving the web interface
> further).
>
> So upgrade first.
>
>
>
> On Fri, Dec 20, 2013 at 9:01 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>>
>> On Dec 20, 2013, at 11:32 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
>>
>>> I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
>>> settings from the latest draft:
>>>
>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>
>>> I have a 30Mb down / 5Mb upload cable connection.
>>>
>>> With fq_codel, even undershooting network upload bandwidth by more
>>> than 95%, I'm seeing 500ms excessive upload buffering warnings from
>>> netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
>>> and the same was true.
>>
>> I have seen the same thing, although with different CeroWrt firmware. Netalyzr was reporting
>>> 500 msec buffering in both directions.
>>
>> However, I was simultaneously running a ping to Google during that Netalyzr run, and the
>> ping times started at ~55 msec before I started Netalyzr, and occasionally they would bump
>> up to 70 or 80 msec, but never the long times that Netzlyzr reported...
>>
>> I also reported this to the Netalyzr mailing list and they didn’t seem surprised. I’m not sure how to interpret this.
>>
>>> With pie (and default settings), the buffer warnings go away:
>>>
>>> http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>>>
>>> And the connection performs very well while torrenting and gaming.
>>>
>>> Should I try new code? Or can I tweak some variables and/or delay
>>> options in scripts for codel?
>>
>> A couple thoughts:
>>
>> - There have been a bunch of changes between 3.10.13-2 and the current version (3.10.24-5, which seems pretty stable). You might try upgrading. (See the “Rough Notes” at the bottom of http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_310_Release_Notes for the progression of changes).
>>
>> - Have you tried a more aggressive decrease to the link speeds on the AQM page (say, 85% instead of 95%)?
>>
>> - Can we get more corroboration from the list about the behavior of Netalyzer?
>>
>> Rich
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Cerowrt-devel] Proper AQM settings for my connection?
From: Hector Ordorica @ 2013-12-21 5:47 UTC (permalink / raw)
To: cerowrt-devel
And the pie tc output, if you are interested:
root@cerowrt:~# tc -s qdisc show dev ge00
qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0
Sent 9106696 bytes 50492 pkt (dropped 5317, overlimits 17208 requeues 0)
backlog 0b 0p requeues 0
qdisc pie 110: parent 1:10 limit 600p target 19 tupdate 27 alpha 2
beta 20 bytemode 0 ecn
Sent 9106696 bytes 50492 pkt (dropped 5317, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
prob 0.000000 delay 0 avg_dq_rate 0 ecn_mark 0
qdisc ingress ffff: parent ffff:fff1 ----------------
Sent 44939413 bytes 59735 pkt (dropped 2811, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
On Fri, Dec 20, 2013 at 9:38 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
> Interesting, I'll upgrade as soon as I have the chance to reconfigure it.
>
> Pinging and testing to the same netalyzr server. The replies started
> to drop during the downlink and uplink tests, except for fq_codel,
> which remained relatively stable.
>
> No AQM:
>
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=107ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=129ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=105ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=152ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=139ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=156ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=153ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=118ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=166ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=176ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=160ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=138ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=150ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=182ms TTL=37
>
> pie:
>
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=463ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=128ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=108ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=100ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=144ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=174ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=128ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=123ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
>
> fq_codel:
>
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=90ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=124ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=117ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=99ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=100ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=99ms TTL=37
>
>
> root@cerowrt:~# tc -s qdisc show dev ge00
> qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0
> Sent 10342827 bytes 50261 pkt (dropped 2158, overlimits 18307 requeues 0)
> backlog 0b 0p requeues 0
> qdisc fq_codel 110: parent 1:10 limit 600p flows 1024 quantum 300
> target 5.0ms interval 100.0ms
> Sent 10342827 bytes 50261 pkt (dropped 4928, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1514 drop_overlimit 2165 new_flow_count 1568 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc ingress ffff: parent ffff:fff1 ----------------
> Sent 44097076 bytes 59361 pkt (dropped 81, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>
> Thanks, I'll also look into rrul.
>
> On Fri, Dec 20, 2013 at 9:16 PM, Dave Taht <dave.taht@gmail.com> wrote:
>> Netanalyzr is inaccurate. It pushes out a udp stream for not long
>> enough fpr codel to react, thus giving you an over-estimate, and
>> furthermore doesn't detect the presence of flow queuing on the link by
>> sending a secondary flow. This latter problem in netanalyzer is
>> starting to bug me. They've known they don't detect SFQ, SQF, or
>> fq_codel or drr for a long time now, these packet schedulers are
>> deployed at the very least at FT and free.fr and probably quite a few
>> places more, and detecting it is straightforward.
>>
>> Netanalyzr + a ping on the side is all that is needed to see
>> difference between bloat, aqm, and packet scheduling.
>>
>> The rrul test is even better.
>>
>> I would be interested in your pie results on the link...
>>
>> netanalyzer + a ping -c 60 somewhere in both cases...
>>
>> however... there WAS a lot of churn in the AQM code these past few
>> months, so it is possible you have a busted version of the aqm scripts
>> as well. a sample of your
>>
>> tc -s qdisc show dev ge00
>>
>> would be helpful. As rich says, 3.10.24-5 is pretty good at this
>> point, and a large number of people have installed it, with only a few
>> problems (We have a kernel issue that rose it's ugly head again
>> (instruction traps), and we are discussing improving the web interface
>> further).
>>
>> So upgrade first.
>>
>>
>>
>> On Fri, Dec 20, 2013 at 9:01 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>>>
>>> On Dec 20, 2013, at 11:32 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
>>>
>>>> I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
>>>> settings from the latest draft:
>>>>
>>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>>
>>>> I have a 30Mb down / 5Mb upload cable connection.
>>>>
>>>> With fq_codel, even undershooting network upload bandwidth by more
>>>> than 95%, I'm seeing 500ms excessive upload buffering warnings from
>>>> netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
>>>> and the same was true.
>>>
>>> I have seen the same thing, although with different CeroWrt firmware. Netalyzr was reporting
>>>> 500 msec buffering in both directions.
>>>
>>> However, I was simultaneously running a ping to Google during that Netalyzr run, and the
>>> ping times started at ~55 msec before I started Netalyzr, and occasionally they would bump
>>> up to 70 or 80 msec, but never the long times that Netzlyzr reported...
>>>
>>> I also reported this to the Netalyzr mailing list and they didn’t seem surprised. I’m not sure how to interpret this.
>>>
>>>> With pie (and default settings), the buffer warnings go away:
>>>>
>>>> http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>>>>
>>>> And the connection performs very well while torrenting and gaming.
>>>>
>>>> Should I try new code? Or can I tweak some variables and/or delay
>>>> options in scripts for codel?
>>>
>>> A couple thoughts:
>>>
>>> - There have been a bunch of changes between 3.10.13-2 and the current version (3.10.24-5, which seems pretty stable). You might try upgrading. (See the “Rough Notes” at the bottom of http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_310_Release_Notes for the progression of changes).
>>>
>>> - Have you tried a more aggressive decrease to the link speeds on the AQM page (say, 85% instead of 95%)?
>>>
>>> - Can we get more corroboration from the list about the behavior of Netalyzer?
>>>
>>> Rich
>>
>>
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Cerowrt-devel] Proper AQM settings for my connection?
From: Sebastian Moeller @ 2013-12-21 10:32 UTC (permalink / raw)
To: Hector Ordorica, cerowrt-devel
Hector Ordorica <hechacker1@gmail.com> wrote:
>I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
>settings from the latest draft:
>
>http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>
>I have a 30Mb down / 5Mb upload cable connection.
>
>With fq_codel, even undershooting network upload bandwidth by more
>than 95%, I'm seeing 500ms excessive upload buffering warnings from
>netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
>and the same was true.
Hi Hector,

So I have been fooled by Netalyzr before, just as you are now. Netalyzr uses a very peculiar probe to measure the depth of the buffers: a totally nonreactive, inelastic "flood" of UDP packets of relatively short duration. The only real-world traffic that looks like this is a denial-of-service attack on your router.

fq_codel tries very hard to be a good citizen that steers flows gently to their fair share of the bandwidth; in case flows do not react, fq_codel will slowly take the gloves off, so to speak, and restrict those flows more aggressively. The Netalyzr probe is simply too short for fq_codel to get serious with its packet dropping. Real traffic, be it TCP or UDP, typically tries to adjust to dropped packets by reducing its transmission rate. In other words, Netalyzr measures a sort of worst-case buffering for fq_codel. Note that for pfifo_fast this worst case is something you actually encounter with real traffic as well. So what Netalyzr is missing is a report telling you whether the reported buffering will increase the overall latency of the system, or not...
To summarize: unless you see UDP floods as a typical use case for your internet connection, the Netalyzr buffering numbers have no great significance for day-to-day use, as long as you are using a modern qdisc like fq_codel or pie.
As Dave taught me in the past, you can easily test this hypothesis by modifying the limit parameter of fq_codel in simple.qos or simplest.qos. The larger the limit and the slower the link speed in the measured direction, the greater the reported buffering.
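For a feel of the numbers involved, here is a rough worst-case estimate using the values from your earlier tc output on this thread (limit 600p, maxpacket 1514) and assuming the upload shaper runs at about 95% of 5 Mbit/s; this is an upper bound on what a nonreactive flood could see, not a prediction of normal latency:

    600 packets * 1514 bytes * 8 bits/byte  ~= 7.27 Mbit of queue
    7.27 Mbit / 4.75 Mbit/s                 ~= 1.5 s worst-case drain time

So halving the limit roughly halves that ceiling. If you want to experiment without editing the scripts, something like the following should adjust the running leaf qdisc in place (handle and parent taken from your tc output; the value 300 is purely illustrative):

    tc qdisc change dev ge00 parent 1:10 handle 110: fq_codel limit 300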
>
>With pie (and default settings), the buffer warnings go away:
>
>http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>
>And the connection performs very well while torrenting and gaming.
>
>Should I try new code? Or can I tweak some variables and/or delay
>options in scripts for codel?
>
>Thanks for your work,
>Hector
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
* Re: [Cerowrt-devel] Proper AQM settings for my connection?
From: Sebastian Moeller @ 2013-12-21 10:40 UTC (permalink / raw)
To: Rich Brown, Hector Ordorica; +Cc: cerowrt-devel
Rich Brown <richb.hanover@gmail.com> wrote:
>
>On Dec 20, 2013, at 11:32 PM, Hector Ordorica <hechacker1@gmail.com>
>wrote:
>
>> I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
>> settings from the latest draft:
>>
>>
>http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>
>> I have a 30Mb down / 5Mb upload cable connection.
>>
>> With fq_codel, even undershooting network upload bandwidth by more
>> than 95%, I'm seeing 500ms excessive upload buffering warnings from
>> netalyzr. Download is ok at 130ms. I was previously on a 3.8 release
>> and the same was true.
>
>I have seen the same thing, although with different CeroWrt firmware.
>Netalyzr was reporting
>> 500 msec buffering in both directions.
>
>However, I was simultaneously running a ping to Google during that
>Netalyzr run, and the
>ping times started at ~55 msec before I started Netalyzr, and
>occasionally they would bump
>up to 70 or 80 msec, but never the long times that Netzlyzr reported...
>
>I also reported this to the Netalyzr mailing list and they didn’t seem
>surprised. I’m not sure how to interpret this.
>
>> With pie (and default settings), the buffer warnings go away:
>>
>>
>http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>>
>> And the connection performs very well while torrenting and gaming.
>>
>> Should I try new code? Or can I tweak some variables and/or delay
>> options in scripts for codel?
>
>A couple thoughts:
>
>- There have been a bunch of changes between 3.10.13-2 and the current
>version (3.10.24-5, which seems pretty stable). You might try
>upgrading. (See the “Rough Notes” at the bottom of
>http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_310_Release_Notes
>for the progression of changes).
>
>- Have you tried a more aggressive decrease to the link speeds on the
>AQM page (say, 85% instead of 95%)?
Hi Rich,

This will not affect the report by Netalyzr much; if anything, it will most likely increase the reported buffering. Netalyzr fills fq_codel's buffer and finishes before fq_codel gets serious about dropping packets to get control over the unruly Netalyzr flow.
>
>- Can we get more corroboration from the list about the behavior of
>Netalyzer?
Yes, several people have stumbled over this issue in the past, probably indicating we should write a FAQ or wiki page about the matter to avoid this being rediscovered again and again. I really like Dave's proposed concurrent ping test...
Best Regards
Sebastian
>
>Rich
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.