* [Cerowrt-users] SQM Setup and Performance
From: Jeremy Tourville @ 2014-01-30 17:06 UTC
To: cerowrt-users
Hello, I followed your excellent instructions here -
http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_SQM_for_CeroWrt_310
I am using build 3.10.24-8
I am using a DSL line rated at 6 Mbps down and 512kbps up. My real throughput without SQM enabled is 5.7Mbps down and 450kbps up.
After enabling SQM, my throughput has dropped to approximately 4.5 Mbps down and 350 kbps up. Does this drop seem expected (within norms)?
It seems reasonable to expect some throughput loss in exchange for better bufferbloat management, given that SQM is set to 85-95% of the actual download/upload speeds. Please correct me if I am wrong. :-) But my question is, how much is too much? SQM does fix the bufferbloat issue, as evidenced by ping testing: with SQM on, all ping times were 100 ms or less; with SQM off, they jumped to 500 ms or more during the speed tests.
For reference I have set the parameters as indicated in the screenshots. I have changed only two variables and tested after each change as indicated in the grid below.
Queue setup script      Per-packet overhead   Test results
Test #1: simple.qos     40                    no buffer, less throughput; <100 ms, 4.5 Mbps down
Test #2: simple.qos     44                    no buffer, less throughput; <100 ms, 4.5 Mbps down
Test #3: simplest.qos   40                    no buffer, less throughput; <100 ms, 4.5 Mbps down
Test #4: simplest.qos   44                    no buffer, less throughput; <100 ms, 4.5 Mbps down
I also read your statement-
>>>"The CeroWrt development team has been working to nail down a no-brainer set of instructions for eliminating bufferbloat - the lag/latency that kills voice & video chat, gaming, and overall network responsiveness. The hard part is that optimal configuration of the Smart Queue Management (SQM) link is difficult - there are tons of options an ISP can set. Although CeroWrt can adapt to any of them, it's difficult to find out the exact characteristics of the link you have."
What info do I need to get from my ISP to best optimize my connection?
I also recognize that this could be an issue that requires multiple changes at once. I am curious to know from the experts what your thoughts are on this. Many thanks in advance!
-Jeremy
* Re: [Cerowrt-users] SQM Setup and Performance
From: Dave Taht @ 2014-01-30 17:42 UTC
To: Jeremy Tourville; +Cc: cerowrt-users
On Thu, Jan 30, 2014 at 9:06 AM, Jeremy Tourville <organ_dr@hotmail.com> wrote:
> Hello, I followed your excellent instructions here -
>
> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_SQM_for_CeroWrt_310
>
> I am using build 3.10.24-8
>
> I am using a DSL line rated at 6 Mbps down and 512kbps up. My real
> throughput without SQM enabled is 5.7Mbps down and 450kbps up.
>
> After enabling SQM my throughput has dropped to approximately 4.5Mbps down
> and 350 kbps up. Does this seem like an amount that is expected? (within
> norms?)
>
I recommend tuning, using reasonable benchmarks, like rrul.
Generally you can get pretty close to your provider's provisioned bandwidth, but repeated tuning is something we try really hard to avoid. We know 85% always works. :)
Hopefully we'll come up with a tool or approach that works dynamically one day, but we're not there yet.
So create a setting, run a benchmark, change a knob, run the benchmark again, and repeat until you get something satisfying.
example:
http://snapon.lab.bufferbloat.net/~cero2/jimreisert/results.html
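(For reference, rrul runs like that are typically driven with netperf-wrapper -- later renamed flent; a minimal sketch, with the server name and run title as placeholders and the option spellings to be checked against --help:)

  # 60-second rrul run against a netperf server, tagged with the SQM settings in use
  netperf-wrapper -H netperf.example.net -l 60 -t "simple.qos 4500/400 oh44" rrul
  # plot latency under load from the data file the run writes out
  netperf-wrapper -i <datafile> -p ping -o ping.png

Change one SQM knob in the GUI, rerun, and compare the plots.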
notes:
At the moment nfq_codel is a mildly bigger win than fq_codel at bandwidths below 1 Mbit.
There was a change to SQM in releases after this (I think): it used to allocate a fairly large amount of bandwidth for priority traffic (64 kbit, I recall); now it allocates 12 or so. rrul exercises all the queues.
You might want to fiddle with the codel target a little (e.g. target 20ms).
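(To see what the shaper actually ended up with before fiddling -- just a sketch; ge00 is CeroWrt's WAN interface, and the ingress side sits on an ifb device whose name varies by build:)

  # inspect the shaper and fq_codel leaves SQM set up on the WAN side,
  # including the codel target and interval currently in effect
  tc -s qdisc show dev ge00
  # find the ifb device used for ingress shaping and inspect that too
  ip link | grep ifb
  tc -s qdisc show dev <ifb device>

A larger target (e.g. "target 20ms") can then be passed to the leaf qdiscs through the SQM GUI's advanced option strings, if your build exposes them.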
> It would seem reasonable that I should expect some performance loss at the
> expense of better bufferbloat management based on setting 85-95% of actual
> download/upload speeds. Please correct me if I am wrong. :-) But my
> question is, how much is too much? The setting of SQM does fix the
> bufferbloat issue as evidenced by ping testing and times for packets. With
> SQM on all packets were 100ms or less. With SQM off the times jumped to
> over 500ms or more during the speed testing.
>
> For reference I have set the parameters as indicated in the screenshots.
> I have changed only two variables and tested after each change as indicated
> in the grid below.
>
> Queue setup script      Per-packet overhead   Test results
> Test #1: simple.qos     40                    no buffer, less throughput; <100 ms, 4.5 Mbps down
> Test #2: simple.qos     44                    no buffer, less throughput; <100 ms, 4.5 Mbps down
> Test #3: simplest.qos   40                    no buffer, less throughput; <100 ms, 4.5 Mbps down
> Test #4: simplest.qos   44                    no buffer, less throughput; <100 ms, 4.5 Mbps down
>
>
As for whether an overhead of 44 or 40 is correct for your provider...
> I also read your statement-
> >>>"The CeroWrt development team has been working to nail down a
> no-brainer set of instructions for eliminating bufferbloat - the
> lag/latency that kills voice & video chat, gaming, and overall network
> responsiveness. The hard part is that optimal configuration of the Smart
> Queue Management (SQM) link is difficult - there are tons of options an ISP
> can set. Although CeroWrt can adapt to any of them, it's difficult to find
> out the exact characteristics of the link you have."
>
> What info do I need to get from my ISP to best optimize my connection?
>
Ask 'em to do their own benchmarking with cero & rrul, adopt fq_codel on
their dslams and (especially) rate-limiters, and publish their results for
each tier they sell?
> I also recognize that this could be an issue that requires multiple
> changes at once. I am curious to know from the experts what your thoughts
> are on this. Many thanks in advance!
I think you can get closer than you got.
>
> -Jeremy
>
--
Dave Täht
Fixing bufferbloat with cerowrt:
http://www.teklibre.com/cerowrt/subscribe.html
* Re: [Cerowrt-users] SQM Setup and Performance
From: Sebastian Moeller @ 2014-01-30 19:29 UTC
To: Jeremy Tourville; +Cc: cerowrt-users
Hi Jeremy,
On Jan 30, 2014, at 18:06, Jeremy Tourville <organ_dr@hotmail.com> wrote:
> Hello, I followed your excellent instructions here -
> http://www.bufferbloat.net/projects/cerowrt/wiki/Setting_up_SQM_for_CeroWrt_310
>
> I am using build 3.10.24-8
>
> I am using a DSL line rated at 6 Mbps down and 512kbps up. My real throughput without SQM enabled is 5.7Mbps down and 450kbps up.
This looks interesting. The ATM 48-bytes-in-53-byte-cell encapsulation used in ADSL only leaves 100*48/53 = 90.57% of the specified line rate available for your traffic (and that still contains further per-packet overhead). So I would assume that the actual line rate is >6.3 Mbps? Do you have any chance of verifying the line rate to the DSLAM? Many modems/DSL routers offer a web page that gives some statistics and information like the line rate. You will need the line rate as precisely as possible if you want to minimize the bandwidth "sacrifice" needed to keep latencies reasonable (if in doubt, err on the too-small side though).
> After enabling SQM my throughput has dropped to approximately 4.5Mbps down and 350 kbps up.
So you specified 5.7 and 0.45? Then the link layer adaptation mechanism will try to account for the 48-in-53 problem and cut down your available rates by ~10%, so the best you can expect is ~5.1 and 0.4. If you specified 90% of the measured speed you are already at 5.7 * 0.9 (90% of measured) = 5.13, * 0.9 (fixed ATM overhead) = 4.617 Mbps, and 450 * 0.9 (90% of measured) = 405, * 0.9 (fixed ATM overhead) = 364.5 kbps. So you cannot really expect much more than you got.
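(The same arithmetic for arbitrary settings, as a throwaway one-liner -- just a sketch, assuming a stock awk; the 48/53 factor here replaces the ~0.9 ATM approximation above:)

  # effective ceiling = measured rate * fraction entered in SQM * 48/53 ATM cell tax
  awk 'BEGIN { printf "down %.0f kbps, up %.0f kbps\n", 5700*0.9*48/53, 450*0.9*48/53 }'

which prints roughly 4646 and 367 kbps; per-packet overhead then eats a bit more, consistent with the 4.5 Mbps / 350 kbps measured.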
> Does this seem like an amount that is expected? (within norms?)
> It would seem reasonable that I should expect some performance loss at the expense of better bufferbloat management based on setting 85-95% of actual download/upload speeds. Please correct me if I am wrong. :-)
Now, my recommendation for ADSL links (well all ATM based links actually) is to start out with the line rate and reduce from there. (The ATM link layer adjustment effectively reduces to 90% of link rate already as explained above)
> But my question is, how much is too much?
So the multistep procedure is roughly as follows. Start with the full line rate and no link layer adjustments: measure the ping RTT to the nearest host that responds, with no additional traffic; that gives you the best-case latency baseline of your link. Next load the link with a speed test and run the ping again; that gives you a bad case (for the worst case you need to saturate both up- and downlink at the same time while measuring the ping RTT). Then activate the link layer adjustments with your line rates specified and repeat the test under load; reduce the rates by 5% and repeat. Most likely you will notice that each reduction in bandwidth also reduces the latency. You can then see the different bandwidth/latency trade-offs possible on your link; just pick the one you are most comfortable with.
Then use your link normally, but every now and then, when the link is loaded, repeat the ping test and see whether you are still happy; if not, adjust the rates.
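(A concrete way to run those ping measurements -- a sketch; substitute your own first hop or a nearby server for the placeholder address:)

  # baseline: RTT to the first hop with the link otherwise idle
  ping -c 60 192.0.2.1
  # now start a speed test (or saturate up- and downlink) and run it again
  ping -c 60 192.0.2.1
  # compare min/avg/max between the two runs; the difference is your bufferbloat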
> The setting of SQM does fix the bufferbloat issue as evidenced by ping testing and times for packets. With SQM on all packets were 100ms or less.
It is interesting to compare this with the ping time to the same host without any load on your network.
> With SQM off the times jumped to over 500ms or more during the speed testing.
>
> For reference I have set the parameters as indicated in the screenshots. I have changed only two variables and tested after each change as indicated in the grid below.
>
> Que setup script Per packet overhead test results
> Test #1 simple.qos 40 no buffer, less throughput <100ms, 4.5mbps down
> Test #2 simple.qos 44 no buffer, less throughput <100ms, 4.5mbps down
> Test #3 simplest.qos 40 no buffer, less throughput <100ms, 4.5mbps down
> Test #4 simplest.qos 44 no buffer, less throughput <100ms, 4.5mbps down
simple.qos and simplest.qos should have no real effect on either latency or bandwidth, matching your results. The overhead is a tiny bit trickier: if you underestimate the overhead you can, under certain conditions, still cause bufferbloat in your modem (these conditions are tricky to recreate, so underestimation will stochastically show up as latency spikes every now and then under load); if you overestimate the overhead you sacrifice more bandwidth than necessary (since the overhead is per packet, this sacrifice depends on the typical size of the packets you send). The idea is to just pick the right values for your encapsulation and be done with it.
My current understanding is that 48 bytes is the worst-case overhead, but I have not yet seen a link with that; 44 bytes, however, is not uncommon, so 44 is not the worst place to start. (Note it is possible to empirically measure the overhead on your link, and I would be happy to help you with this; just contact me if interested.)
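(For the curious, such a measurement works by sweeping ICMP payload sizes and looking for the 48-byte ATM cell staircase in the best-case RTTs; where the steps fall reveals the per-packet overhead. A rough sketch of the data collection only -- not the actual analysis scripts referred to here, and the target address is a placeholder:)

  # probe every payload size from 16 to 116 bytes, 100 pings each (this takes a while)
  for s in $(seq 16 116); do ping -c 100 -i 0.2 -s "$s" 192.0.2.1; done > ping_sweep.log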
>
> I also read your statement-
> >>>"The CeroWrt development team has been working to nail down a no-brainer set of instructions for eliminating bufferbloat - the lag/latency that kills voice & video chat, gaming, and overall network responsiveness. The hard part is that optimal configuration of the Smart Queue Management (SQM) link is difficult - there are tons of options an ISP can set. Although CeroWrt can adapt to any of them, it's difficult to find out the exact characteristics of the link you have."
>
> What info do I need to get from my ISP to best optimize my connection?
The actual line rates for down- and uplink as well as the full encapsulation information: DHCP or PPPoE or PPPoA; VC-MUX or LLC/SNAP; VLAN or no vlan.
>
> I also recognize that this could be an issue that requires multiple changes at once.
You are doing fine; all it needs is a few experiments/measurements to find the best values, and after that you can basically stick to those.
> I am curious to know from the experts what your thoughts are on this.
That would be Dave then (all I know about is the ATM issues, as I still have an ADSL link).
Best Regards
Sebastian
> Many thanks in advance!
>
> -Jeremy