* [Cerowrt-devel] uplink_buffer_adjustment
@ 2014-02-25 11:14 Oliver Niesner
2014-02-25 12:08 ` Sebastian Moeller
2014-02-25 15:59 ` Jim Gettys
0 siblings, 2 replies; 8+ messages in thread
From: Oliver Niesner @ 2014-02-25 11:14 UTC (permalink / raw)
To: cerowrt-devel
Hi list,
I use CeroWrt (3.10.24-8) directly behind my main DSL router.
SQM is set up and performance is good.
When I used Netalyzr from my smartphone I got good results:
> Network buffer measurements (?): Uplink 96 ms, Downlink is good
But when I use my notebook I get this:
> Network buffer measurements (?): Uplink 1200 ms, Downlink is good
I even tried a wired connection and set the ring buffer rx/tx to 64 with ethtool, but
saw only a minimal change in uplink buffering (1100 ms).
Does anyone have an idea what I can try to get better uplink performance?
Regards,
Oliver
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [Cerowrt-devel] uplink_buffer_adjustment
2014-02-25 11:14 [Cerowrt-devel] uplink_buffer_adjustment Oliver Niesner
@ 2014-02-25 12:08 ` Sebastian Moeller
2014-02-25 14:05 ` Maciej Soltysiak
2014-02-25 15:59 ` Jim Gettys
1 sibling, 1 reply; 8+ messages in thread
From: Sebastian Moeller @ 2014-02-25 12:08 UTC (permalink / raw)
To: Oliver Niesner; +Cc: cerowrt-devel
Hi Oliver,
On Feb 25, 2014, at 12:14, Oliver Niesner <oliver.niesner@gmail.com> wrote:
> Hi list,
>
> I use CeroWrt (3.10.24-8) directly behind my main DSL router.
> SQM is set up and performance is good.
This is the most important part; you seem happy with the actual latency under load.
> When I used Netalyzr from my smartphone I got good results:
>
>> Network buffer measurements (?): Uplink 96 ms, Downlink is good
Presumably the smart phone connects to the cerowrt router via wlan?
>
> But when I use my notebook I get this:
>
>> Network buffer measurements (?): Uplink 1200 ms, Downlink is good
If you want to change the number Netalyzr reports, you will need to change the limit variable of the fq_codel instance(s) on your uplink device (ge00); smaller limit values will result in smaller reported buffering.
BUT you should not be concerned about this report at all. Netalyzr tries to fill the buffers with (relatively) high-bandwidth, inelastic, unrelenting UDP traffic; one could say it represents a DOS attack better than real traffic. Real traffic, be it TCP (by virtue of the protocol) or even UDP (by virtue of application writers implementing their own congestion control), will react to packet loss by reducing the transmit speed. SQM does a good job of strategically dropping packets so that all flows can adapt to the available total bandwidth. The beauty of codel is that it starts out quite gently, in a way that is well matched to TCP's congestion avoidance, and turns less gentle if the traffic does not respond and congestion stays high. Netalyzr, now, is this weird combination of short-duration yet unrelenting traffic. The reason SQM does not affect the Netalyzr probe much is that the probe's traffic stops before fq_codel has ramped up its dropping. So what happens is that the probe fills the largest buffer, which is specified by the limit parameter of fq_codel and defaults to 1000 packets on CeroWrt. (Once the queue holds limit packets, all new packets get dropped, as the queue resorts to tail dropping; but that really is just an emergency behavior, and under normal loads on a slow link the queue will never come close to 1000.)
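To put a rough number on that limit: the worst-case queueing delay of a full packet-limited FIFO is just limit × packet size / link rate. A back-of-envelope sketch (the uplink rates here are illustrative assumptions, not Oliver's actual line rate):

```python
def worst_case_delay_ms(limit_pkts, pkt_bytes, uplink_bps):
    """Worst-case queueing delay (ms) of a full packet-limited queue."""
    return limit_pkts * pkt_bytes * 8 * 1000 / uplink_bps

# CeroWrt's default fq_codel limit of 1000 packets, 1500-byte MTU:
print(worst_case_delay_ms(1000, 1500, 10_000_000))  # 10 Mbit/s uplink -> 1200.0 ms
print(worst_case_delay_ms(1000, 1500, 1_000_000))   # 1 Mbit/s uplink  -> 12000.0 ms
```

So on a slow uplink the 1000-packet limit corresponds to a worst case of seconds of queueing, but only an unresponsive flood ever drives the queue that deep.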
TL;DR: Netalyzr probes a buffer depth that, while it exists, has little relevance to CeroWrt's latency behavior under load.
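For completeness, the limit can be inspected and adjusted with tc. This is only a sketch: the device name ge00 comes from this thread, and the parent handle used below (1:11) is an assumption about how the SQM scripts build the qdisc tree, so check the output of the first command before changing anything:

```shell
# Inspect the current qdisc tree and per-qdisc statistics on the uplink
tc -s qdisc show dev ge00

# Shrink the fq_codel packet limit (CeroWrt default: 1000) on one leaf
# qdisc; 'parent 1:11' is an assumption -- use the handle shown above
tc qdisc change dev ge00 parent 1:11 fq_codel limit 300
```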
>>
>
> I even tried a wired connection and set the ring buffer rx/tx to 64 with ethtool, but
> saw only a minimal change in uplink buffering (1100 ms).
Good idea but wrong tunable ;)
>
> Does anyone have an idea what I can try to get better uplink performance?
Unless you intend to run your router under continuous DOS attacks, I would recommend ignoring Netalyzr's buffer measurements (as they do not reflect CeroWrt's behavior under more realistic load well).
Best Regards
Sebastian
>
> Regards,
>
> Oliver
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
* Re: [Cerowrt-devel] uplink_buffer_adjustment
2014-02-25 12:08 ` Sebastian Moeller
@ 2014-02-25 14:05 ` Maciej Soltysiak
2014-02-25 14:19 ` Sebastian Moeller
0 siblings, 1 reply; 8+ messages in thread
From: Maciej Soltysiak @ 2014-02-25 14:05 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: cerowrt-devel
On Tue, Feb 25, 2014 at 1:08 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
> TL;DR: Netalyzr probes a buffer depth that, while it exists, has little relevance to CeroWrt's latency behavior under load.
Yes, it doesn't have much relevance for latency where fq_codel or
similar is employed.
But where it's not (or where fq_codel is defeated by DOS-style
traffic), it seems this comes back as a meaningful metric. In the end,
codel is a workaround for overbuffering, which is not being removed
very quickly.
Best regards,
Maciej
* Re: [Cerowrt-devel] uplink_buffer_adjustment
2014-02-25 14:05 ` Maciej Soltysiak
@ 2014-02-25 14:19 ` Sebastian Moeller
0 siblings, 0 replies; 8+ messages in thread
From: Sebastian Moeller @ 2014-02-25 14:19 UTC (permalink / raw)
To: Maciej Soltysiak; +Cc: cerowrt-devel
Hi Maciej,
On Feb 25, 2014, at 15:05, Maciej Soltysiak <maciej@soltysiak.com> wrote:
> On Tue, Feb 25, 2014 at 1:08 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> TL;DR: Netalyzr probes a buffer depth that, while it exists, has little relevance to CeroWrt's latency behavior under load.
> Yes, it doesn't have much relevance for latency where fq_codel or
> similar is employed.
Exactly.
> But where it's not (or where fq_codel is defeated by DOS-style
> traffic), it seems this comes back as a meaningful metric.
That is what I thought as well initially. But then (I think Eric D explained this) if the system is under a DOS attack, we have more severe problems at hand than ping latency ;) Think about it this way: if the flood saturates our uplink, fq_codel will sooner or later rein it in anyway, and we only see a temporary glitch in latency (during the time the queue stays in tail-drop). If the DOS attack is coming from the outside against our system, we are hosed anyway, as the upstream CMTS/DSLAM/BRAS buffers will fill up, and over those we have no control (the ingress of a DSLAM is typically much greater than the egress to a specific line, so a normally temporary queue will build there). I think during a DOS attack, every hop with ingress-from-DOS > egress-to-destination will see its buffers filling up...
So my current thinking is that it is nice to have some buffering against large swings in available bandwidth, to smooth out the traffic a bit.
On the topic of the limit parameter I want to add that I see its most relevant role as keeping CeroWrt out of out-of-memory conditions, which would cause the unit to reboot. So ideally the total amount of buffering in the system stays low enough that the box does not go belly-up.
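To make that concrete, here is a rough upper bound on the RAM one full queue can pin down (the ~2 KiB per-packet skb overhead is an assumption; actual kernel overhead varies by driver and architecture):

```python
def queue_mem_mib(limit_pkts, bytes_per_pkt=2048):
    """Rough upper bound on the RAM (MiB) a full packet-limited queue can pin down."""
    return limit_pkts * bytes_per_pkt / (1024 * 1024)

# One fq_codel instance at the CeroWrt default limit of 1000 packets
# uses on the order of 2 MiB when full -- noticeable on a 64 MB router,
# and there is one instance per shaped direction/priority tier.
print(queue_mem_mib(1000))
```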
> In the end,
> codel is a workaround for overbuffering, which is not being removed
> very quickly.
I beg to differ: once the buffer management gets smart, buffers turn from a liability into a boon ;)
Best Regards
Sebastian
>
> Best regards,
> Maciej
* Re: [Cerowrt-devel] uplink_buffer_adjustment
2014-02-25 11:14 [Cerowrt-devel] uplink_buffer_adjustment Oliver Niesner
2014-02-25 12:08 ` Sebastian Moeller
@ 2014-02-25 15:59 ` Jim Gettys
[not found] ` <D82FF87B-F220-45B4-AA87-27037DB472C9@icsi.berkeley.edu>
2014-02-25 17:10 ` Dave Taht
1 sibling, 2 replies; 8+ messages in thread
From: Jim Gettys @ 2014-02-25 15:59 UTC (permalink / raw)
To: Oliver Niesner; +Cc: Nicholas Weaver, cerowrt-devel
On Tue, Feb 25, 2014 at 6:14 AM, Oliver Niesner <oliver.niesner@gmail.com>wrote:
> Hi list,
>
> I use CeroWrt (3.10.24-8) directly behind my main DSL router.
> SQM is set up and performance is good.
> When I used Netalyzr from my smartphone I got good results:
>
> > Network buffer measurements (?): Uplink 96 ms, Downlink is good
>
> But when I use my notebook I get this:
>
> > Network buffer measurements (?): Uplink 1200 ms, Downlink is good
>
> I even tried a wired connection and set the ring buffer rx/tx to
> 64 with ethtool, but
> saw only a minimal change in uplink buffering (1100 ms).
>
> Does anyone have an idea what I can try to get better uplink performance?
>
Netalyzr uses a UDP-based test for "filling the buffers"; it is not
responsive to drops/marks at all, the way a TCP test would be.
So if you run it, the flows it generates are unresponsive and indicate the
true size of the buffers at the bottleneck link, even though any normal TCP
would long since have backed off and nothing like that amount of buffering
would have taken place. Furthermore, the flow queuing of fq_codel isolates
those flows from other flows, and therefore you do not get the bad latency
you would otherwise see on those other flows.
In short (particularly since fq_codel is already deployed in the millions by
a few ISPs; it is no longer a fluke found only in hackers'
hands), Nick Weaver needs to improve Netalyzr to detect flow-queuing
algorithms and make some sense of the situation. It would be great to
monitor the spread of these algorithms around the Internet over the coming
years.
So it is arguably a "bug" in Netalyzr. It is certainly extremely
misleading.
Nick?
- Jim
> Regards,
>
> Oliver
>
>
* Re: [Cerowrt-devel] uplink_buffer_adjustment
[not found] ` <D82FF87B-F220-45B4-AA87-27037DB472C9@icsi.berkeley.edu>
@ 2014-02-25 16:46 ` Jim Gettys
2014-02-25 21:27 ` dpreed
0 siblings, 1 reply; 8+ messages in thread
From: Jim Gettys @ 2014-02-25 16:46 UTC (permalink / raw)
To: Nicholas Weaver; +Cc: cerowrt-devel
On Tue, Feb 25, 2014 at 11:02 AM, Nicholas Weaver <nweaver@icsi.berkeley.edu
> wrote:
>
> On Feb 25, 2014, at 7:59 AM, Jim Gettys <jg@freedesktop.org> wrote:
> > So it is arguably a "bug" in netalyzr. It is certainly extremely
> misleading.
> >
> > Nick?
>
> Rewriting it as a TCP-based stresser is definitely on our to-do list.
>
Good; though I'm not sure you'll be able to build a TCP-based test that fills
the buffers fast enough to measure some of the buffering out there (at least
not without hacking the TCP implementation).
The other piece of this is detecting whether flow queuing is active; this makes
a bigger difference to actual latency than mark/drop algorithms do by
themselves.
- Jim
>
>
> --
> Nicholas Weaver it is a tale, told by an idiot,
> nweaver@icsi.berkeley.edu full of sound and fury,
> 510-666-2903                          signifying nothing
> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc
>
>
* Re: [Cerowrt-devel] uplink_buffer_adjustment
2014-02-25 15:59 ` Jim Gettys
[not found] ` <D82FF87B-F220-45B4-AA87-27037DB472C9@icsi.berkeley.edu>
@ 2014-02-25 17:10 ` Dave Taht
1 sibling, 0 replies; 8+ messages in thread
From: Dave Taht @ 2014-02-25 17:10 UTC (permalink / raw)
To: Jim Gettys; +Cc: Nicholas Weaver, cerowrt-devel
On Tue, Feb 25, 2014 at 7:59 AM, Jim Gettys <jg@freedesktop.org> wrote:
>
>
>
> On Tue, Feb 25, 2014 at 6:14 AM, Oliver Niesner <oliver.niesner@gmail.com>
> wrote:
>>
>> Hi list,
>>
>> I use CeroWrt (3.10.24-8) directly behind my main DSL router.
>> SQM is set up and performance is good.
>> When I used Netalyzr from my smartphone I got good results:
>>
>> > Network buffer measurements (?): Uplink 96 ms, Downlink is good
>>
>> But when I use my notebook I get this:
>>
>> > Network buffer measurements (?): Uplink 1200 ms, Downlink is good
>>
>> I even tried a wired connection and set the ring buffer rx/tx to
>> 64 with ethtool, but
>> saw only a minimal change in uplink buffering (1100 ms).
>>
>> Does anyone have an idea what I can try to get better uplink performance?
>
>
> Netalyzr uses a UDP based test for "filling the buffers"; it is not
> responsive to drops/marks at all, the way a TCP test would be.
>
> So if you run it, the flows it generates are unresponsive, and indicate the
> true size of the buffers at the bottleneck link, even though any normal TCP
> would long since have responded and nothing like that amount of buffering
> would have taken place. Furthermore, the flow queuing of fq_codel isolates
> those flows from other flows, and therefore you do not get the bad latency
> you would otherwise get on those flows.
>
> In short (particularly since fq_codel is already deployed in the millions by a
> few ISPs; it is no longer a fluke found only in hackers'
> hands), Nick Weaver needs to improve Netalyzr to detect flow-queuing
> algorithms and make some sense out of the situation. It would be great to
> monitor the spread of these algorithms around the Internet over the coming
> years.
>
> So it is arguably a "bug" in netalyzr. It is certainly extremely
> misleading.
A lot of evidence has accumulated that SFQ and SQF are widely deployed
by more than a few DSL providers; notably, several in France have come
forward. There is some evidence it is in some firmwares, too.
SFQ is the default on free.fr's old boxes, and fq_codel was deployed
on their Revolution v6 product in August 2012 (an upgrade to Linux 3.11,
with better hashing support for IPv6 and 6rd, is on its way).
They were kind enough to document their QoS/AQM/scheduling system for
me (and to point us at problems with setting the fq_codel target too low
on low-bandwidth links, which we have now fixed in CeroWrt's SQM system
but which is not yet fixed in OpenWrt's qos-scripts or DD-WRT's), which I
have begun to encapsulate in this still very drafty Internet draft:
http://snapon.lab.bufferbloat.net/~d/draft-taht-home-gateway-best-practices-00.html
So the naive single-UDP-flow packet-flooding test in Netalyzr (and in
speedtest and most other tools) gives misleading results as to the
effects and size of bufferbloat in whole countries, and I REALLY wish
we could get good data on the total size of the deployment of these
technologies worldwide.
Measuring with a high-rate flow and, simultaneously, a low-rate flow
on a different 5-tuple will show more of the truth. Preferably
bidirectionally, at the same time...
> Nick?
>
> - Jim
>
>>
>> Regards,
>>
>> Oliver
>>
>
>
>
>
--
Dave Täht
Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Cerowrt-devel] uplink_buffer_adjustment
2014-02-25 16:46 ` Jim Gettys
@ 2014-02-25 21:27 ` dpreed
0 siblings, 0 replies; 8+ messages in thread
From: dpreed @ 2014-02-25 21:27 UTC (permalink / raw)
To: Jim Gettys; +Cc: Nicholas Weaver, cerowrt-devel
I've measured buffer size with TCP, when there is no fq_codel or whatever doing drops. After all, this is what caused me to get concerned.
And actually, since UDP packets are dropped by fq_codel the same as TCP packets, it's easy to see how big fq_codel lets the buffers get.
If the buffer gets to be 1200 msec long with UDP, that's a problem with fq_codel - just think about it. Someone's tuning of fq_codel is allowing an excess buildup of queueing, if that's what is observed.
So I doubt this is a netalyzr bug at all. Operator error more likely, in tuning fq_codel.
On Tuesday, February 25, 2014 11:46am, "Jim Gettys" <jg@freedesktop.org> said:
> On Tue, Feb 25, 2014 at 11:02 AM, Nicholas Weaver <nweaver@icsi.berkeley.edu
>> wrote:
>
>>
>> On Feb 25, 2014, at 7:59 AM, Jim Gettys <jg@freedesktop.org> wrote:
>> > So it is arguably a "bug" in netalyzr. It is certainly extremely
>> misleading.
>> >
>> > Nick?
>>
>> Rewriting it as a TCP-based stresser is definitely on our to-do list.
>>
>
> Good; though I'm not sure you'll be able to build a TCP one that fills the
> buffers fast enough to determine some of the buffering out there (at least
> without hacking the TCP implementation, anyway).
>
> The other piece of this is detecting flow queuing being active; this makes
> a bigger difference to actual latency than mark/drop algorithms do by
> themselves.
> - Jim
>
>
>>
>>
>> --
>> Nicholas Weaver it is a tale, told by an idiot,
>> nweaver@icsi.berkeley.edu full of sound and fury,
>> 510-666-2903                          signifying nothing
>> PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc
>>
>>
>