From: Hector Ordorica <hechacker1@gmail.com>
To: cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] Proper AQM settings for my connection?
Date: Fri, 20 Dec 2013 21:47:20 -0800 [thread overview]
Message-ID: <CAMNY_1V3h+eeZsiOmnAYZEzk6cLt9H-6t4qDyRLCq1V21JkcGw@mail.gmail.com> (raw)
In-Reply-To: <CAMNY_1VJs3yFqTu9em9g50E8fVu2-RXJ7eRsvXukj7RvBK5Ggw@mail.gmail.com>
And the pie tc output, if you are interested:
root@cerowrt:~# tc -s qdisc show dev ge00
qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0
Sent 9106696 bytes 50492 pkt (dropped 5317, overlimits 17208 requeues 0)
backlog 0b 0p requeues 0
qdisc pie 110: parent 1:10 limit 600p target 19 tupdate 27 alpha 2 beta 20 bytemode 0 ecn
Sent 9106696 bytes 50492 pkt (dropped 5317, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
prob 0.000000 delay 0 avg_dq_rate 0 ecn_mark 0
qdisc ingress ffff: parent ffff:fff1 ----------------
Sent 44939413 bytes 59735 pkt (dropped 2811, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
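A quick way to compare runs like the two `tc -s qdisc` dumps in this thread is to pull the per-qdisc drop counters out of the output. A minimal sketch (the regex assumes the field layout of the output pasted above; it is not a general tc parser):

```python
import re

def qdisc_drops(tc_output):
    """Map each qdisc name to its (sent_pkts, dropped) counters."""
    stats = {}
    current = None
    for line in tc_output.splitlines():
        m = re.match(r'\s*qdisc (\S+)', line)
        if m:
            current = m.group(1)
            continue
        # The "Sent" line immediately follows each qdisc header.
        m = re.search(r'Sent \d+ bytes (\d+) pkt \(dropped (\d+),', line)
        if m and current:
            stats[current] = (int(m.group(1)), int(m.group(2)))
            current = None
    return stats

sample = """\
qdisc pie 110: parent 1:10 limit 600p target 19 tupdate 27 alpha 2 beta 20 bytemode 0 ecn
 Sent 9106696 bytes 50492 pkt (dropped 5317, overlimits 0 requeues 0)
"""
print(qdisc_drops(sample))  # {'pie': (50492, 5317)}
```

Running it over both dumps makes the pie-vs-fq_codel drop behavior easy to eyeball side by side.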
On Fri, Dec 20, 2013 at 9:38 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
> Interesting, I'll upgrade as soon as I have the chance to reconfigure it.
>
> I was pinging the same Netalyzr server during the tests. The replies
> started to drop during the downlink and uplink tests, except with
> fq_codel, which remained relatively stable.
>
> No AQM:
>
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=107ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=129ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=105ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=152ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=139ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=98ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=156ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=153ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=118ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=166ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=176ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=160ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=138ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=150ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=182ms TTL=37
>
> pie:
>
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=463ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=128ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=108ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=100ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=144ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=174ms TTL=37
> Request timed out.
> Reply from 54.234.36.13: bytes=32 time=128ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=123ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
>
> fq_codel:
>
> Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=97ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=90ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=95ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=124ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=94ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=93ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=117ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=99ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=100ms TTL=37
> Reply from 54.234.36.13: bytes=32 time=99ms TTL=37
>
>
> root@cerowrt:~# tc -s qdisc show dev ge00
> qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0
> Sent 10342827 bytes 50261 pkt (dropped 2158, overlimits 18307 requeues 0)
> backlog 0b 0p requeues 0
> qdisc fq_codel 110: parent 1:10 limit 600p flows 1024 quantum 300 target 5.0ms interval 100.0ms
> Sent 10342827 bytes 50261 pkt (dropped 4928, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> maxpacket 1514 drop_overlimit 2165 new_flow_count 1568 ecn_mark 0
> new_flows_len 0 old_flows_len 1
> qdisc ingress ffff: parent ffff:fff1 ----------------
> Sent 44097076 bytes 59361 pkt (dropped 81, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>
> Thanks, I'll also look into rrul.
>
> On Fri, Dec 20, 2013 at 9:16 PM, Dave Taht <dave.taht@gmail.com> wrote:
>> Netalyzr is inaccurate. It pushes out a UDP stream that doesn't run
>> long enough for codel to react, thus giving you an over-estimate, and
>> furthermore it doesn't detect the presence of flow queuing on the
>> link by sending a secondary flow. This latter problem in Netalyzr is
>> starting to bug me. They've known for a long time now that they don't
>> detect SFQ, SQF, fq_codel, or DRR; these packet schedulers are
>> deployed at the very least at FT and free.fr, and probably quite a
>> few places more, and detecting them is straightforward.
>>
>> Netalyzr + a ping on the side is all that is needed to see the
>> difference between bloat, AQM, and packet scheduling.
>>
>> The rrul test is even better.
>>
>> I would be interested in your pie results on the link...
>>
>> Netalyzr + a ping -c 60 to somewhere, in both cases...
>>
>> However... there WAS a lot of churn in the AQM code these past few
>> months, so it is possible you have a busted version of the AQM scripts
>> as well. A sample of your
>>
>> tc -s qdisc show dev ge00
>>
>> would be helpful. As Rich says, 3.10.24-5 is pretty good at this
>> point, and a large number of people have installed it, with only a few
>> problems (we have a kernel issue that reared its ugly head again
>> (instruction traps), and we are discussing improving the web interface
>> further).
>>
>> So upgrade first.
>>
>>
>>
>> On Fri, Dec 20, 2013 at 9:01 PM, Rich Brown <richb.hanover@gmail.com> wrote:
>>>
>>> On Dec 20, 2013, at 11:32 PM, Hector Ordorica <hechacker1@gmail.com> wrote:
>>>
>>>> I'm running 3.10.13-2 on a WNDR3800, and have used the suggested
>>>> settings from the latest draft:
>>>>
>>>> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_AQM_for_CeroWrt_310
>>>>
>>>> I have a 30Mb down / 5Mb upload cable connection.
>>>>
>>>> With fq_codel, even with the AQM upload rate set below 95% of my
>>>> link bandwidth, I'm seeing 500ms excessive-upload-buffering warnings
>>>> from Netalyzr. Download is OK at 130ms. I was previously on a 3.8
>>>> release and the same was true.
>>>
>>> I have seen the same thing, although with different CeroWrt firmware. Netalyzr was reporting
>>> >500 msec buffering in both directions.
>>>
>>> However, I was simultaneously running a ping to Google during that Netalyzr run, and the
>>> ping times started at ~55 msec before I started Netalyzr, and occasionally they would bump
>>> up to 70 or 80 msec, but never the long times that Netalyzr reported...
>>>
>>> I also reported this to the Netalyzr mailing list and they didn’t seem surprised. I’m not sure how to interpret this.
>>>
>>>> With pie (and default settings), the buffer warnings go away:
>>>>
>>>> http://n2.netalyzr.icsi.berkeley.edu/summary/id=43ca208a-32182-9424fd6e-5c5f-42d7-a9ea
>>>>
>>>> And the connection performs very well while torrenting and gaming.
>>>>
>>>> Should I try new code? Or can I tweak some variables and/or delay
>>>> options in scripts for codel?
>>>
>>> A couple thoughts:
>>>
>>> - There have been a bunch of changes between 3.10.13-2 and the current version (3.10.24-5, which seems pretty stable). You might try upgrading. (See the “Rough Notes” at the bottom of http://www.bufferbloat.net/projects/cerowrt/wiki/CeroWrt_310_Release_Notes for the progression of changes).
>>>
>>> - Have you tried a more aggressive decrease to the link speeds on the AQM page (say, 85% instead of 95%)?
>>>
>>> - Can we get more corroboration from the list about the behavior of Netalyzr?
>>>
>>> Rich
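For the 30 Mb down / 5 Mb up cable link in the original post, Rich's 85% suggestion works out as follows. A back-of-the-envelope sketch; `shaped_kbit` is just an illustrative helper, not an actual AQM-page field:

```python
# Compute a shaped rate as a fraction of the advertised link speed, in kbit/s.
def shaped_kbit(link_mbit, fraction):
    return int(link_mbit * 1000 * fraction)

down, up = 30, 5  # Mbit/s, from the original post
print(shaped_kbit(down, 0.85), shaped_kbit(up, 0.85))  # 25500 4250
```

So an 85% setting would mean entering roughly 25500 kbit/s down and 4250 kbit/s up on the AQM page, versus 28500/4750 at 95%.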
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>>
>>
>> --
>> Dave Täht
>>
>> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
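To put numbers on ping transcripts like the three quoted above (no AQM, pie, fq_codel), one can summarize mean RTT and timeout count per run. A rough sketch, assuming the Windows-style `ping` output format pasted earlier:

```python
import re

def summarize(ping_output):
    """Return (mean RTT in ms, timeout count) for one ping transcript."""
    rtts = [int(m.group(1)) for m in re.finditer(r'time=(\d+)ms', ping_output)]
    timeouts = ping_output.count('Request timed out.')
    mean = sum(rtts) / len(rtts) if rtts else None
    return mean, timeouts

sample = """\
Reply from 54.234.36.13: bytes=32 time=96ms TTL=37
Request timed out.
Reply from 54.234.36.13: bytes=32 time=104ms TTL=37
"""
print(summarize(sample))  # (100.0, 1)
```

Applied to the transcripts above, this makes the comparison concrete: fq_codel shows no timeouts and the tightest RTT spread, while the no-AQM and pie runs each show timeouts and larger excursions.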