<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jun 3, 2015 at 3:27 PM, Dave Taht <span dir="ltr"><<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="h5">On Wed, Jun 3, 2015 at 3:16 PM, Aaron Wood <span dir="ltr"><<a href="mailto:woody77@gmail.com" target="_blank">woody77@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>
> On the 3800, it never meets the rate, but it's only off by maybe 5%.<br>
<br>
</span> As Jonathan already pointed out, this is in the range of the difference between raw rates and TCP goodput, so nothing to write home about ;)<br></blockquote><div><br></div></span><div>Yeah, I'm not too worried about that 5%, based on that explanation.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<span><br>
> But on my new WRT1900AC, it's wildly off, even over the same performance range (I tested it from 80-220Mbps rates in 20Mbps jumps, and saw 40-150Mbps).<br>
<br>
</span> So you started with the WRT1900AC where the wndr3800 dropped off? I wonder whether the Belkin is also almost linear over the lower range?</blockquote><div><br></div></span><div>Yeah, good point on a methodology fail. I'll run another series of tests walking up the same series of rate limits and see what I get.</div><span><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> I also note that we adjust the quantum based on the rate:<br>
from functions.sh:<br>
get_mtu() {<br></blockquote></span><div>... snip </div><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">}<br>
<br>
which we use in the htb invocations via this indirection:<br>
LQ="quantum `get_mtu $IFACE $CEIL`"<br>
<br></blockquote><div><br></div></span><div>That is odd, and that's quite the aggressive curve on quantum, doubling every 10-20Mbps. </div><div><br></div><div>I did some math, and plotted out the quantum vs. bandwidth based on that snippet of code (and assuming a 1500-byte MTU):</div><div><br></div><div><br><img src="cid:ii_iahb3yb21_14dbb78be4ff0cbf" width="407" height="256"><br>And then plotted out the corresponding time in ms that each quantum's worth of bytes (it is bytes, right?) spends on the wire:<br></div><div><br></div><div><img src="cid:ii_iahb5j3j2_14dbb79dca182cd3" width="382" height="256"><br>Which I think is a really interesting plot (and here are the points that line up with the steps in the script):<br></div><div><br></div><div>kbps = quantum = time</div><div><div>20000 = 3000 = 1.2ms</div><div>30000 = 6000 = 1.6ms</div><div>40000 = 12000 = 2.4ms</div><div>50000 = 24000 = 3.84ms</div><div>60000 = 48000 = 6.4ms</div><div>80000 = 96000 = 9.6ms</div></div></div></div></div></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>So it appears that the goal of these values was to keep increasing the quantum as rates went up to provide more bytes per operation, but that's going to risk adding latency as the time-per-quantum crosses the delay target in fq_codel (if I'm understanding this correctly).</div><div><br></div><div>So one thing that I can do is play around with this, and see if I can keep that quantum time at a fixed level (i.e., 10ms, which seems _awfully_ long), or continue increasing it (which seems like a bad idea). I'd love to hear from whoever put this in as to what its goal was (or was it just empirically tuned?)</div></div></div></div></blockquote><div><br></div></div></div><div>Empirical and tested only to about 60Mbits. 
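(For reference, the times in that table fall straight out of time_ms = quantum_bytes * 8 / rate_kbps; a quick sketch, assuming quantum is in bytes and the rate in kbit/s, with the (rate, quantum) pairs taken from the table itself rather than from the snipped get_mtu():)

```python
# Sanity-check the time-per-quantum numbers in the table above.
# Assumes quantum is in bytes and the shaped rate is in kbit/s;
# the (rate, quantum) pairs come from the table, not from get_mtu().
pairs = [
    (20000, 3000),
    (30000, 6000),
    (40000, 12000),
    (50000, 24000),
    (60000, 48000),
    (80000, 96000),
]

for kbps, quantum in pairs:
    # quantum * 8 bits divided by the rate in bits/s gives seconds;
    # with the rate in kbit/s this simplifies to quantum * 8 / kbps ms.
    ms = quantum * 8 / kbps
    print(f"{kbps} kbps: quantum {quantum} = {ms:.2f}ms")
```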
I got back about 15% cpu to do it this way at the time I did it on the wndr3800.</div><div><br></div><div>And WOW, thanks for the analysis! I did not think much about this crossover point at the time - because we'd maxed out on cpu long beforehand. </div></div></div></div></blockquote><div><br></div><div>And most of my testing on x86 has been with this change to the htb quantum entirely disabled and set to 1514.</div><div><br></div><div>The "production" sqm-scripts and my own hacked up version(s) have differed in this respect for quite some time (at least 6 months). </div><div><br></div><div>Great spot on this discrepancy.</div><div><br></div><div>:egg, otherwise, on face:</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>I can certainly see this batching interacting with the codel target.</div><div><br></div><div>On the other hand, you gotta not be running out of cpu in the first place. I am liking where cake is going.</div><div><br></div><div>One of my daydreams is that once we have writable custom ethernet hardware, we can easily do hardware outbound rate limiting/shaping merely by programming a register to return a completion interrupt at the set rate rather than the actual rate. </div></div></div></div></blockquote><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>><br>
> I have no idea where to start looking for the cause. But for now, I'm just setting my ingress rate MUCH higher than I should, because it's working out to the right value as a result.<br>
<br>
</span> It would be great to understand why we need to massively under-shape in that situation to get decent shaping and decent latency under load.<br></blockquote><div><br></div></span><div>Agreed.</div><span><font color="#888888"><div><br></div><div>-Aaron </div></font></span></div></div></div>
<br></span><span class="">_______________________________________________<br>
Cerowrt-devel mailing list<br>
<a href="mailto:Cerowrt-devel@lists.bufferbloat.net" target="_blank">Cerowrt-devel@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/cerowrt-devel" target="_blank">https://lists.bufferbloat.net/listinfo/cerowrt-devel</a><br>
<br></span></blockquote></div><br><br clear="all"><span class=""><div><br></div>-- <br><div>Dave Täht<br>What will it take to vastly improve wifi for everyone?<br><a href="https://plus.google.com/u/0/explore/makewififast" target="_blank">https://plus.google.com/u/0/explore/makewififast</a><br></div>
</span></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Dave Täht<br>What will it take to vastly improve wifi for everyone?<br><a href="https://plus.google.com/u/0/explore/makewififast" target="_blank">https://plus.google.com/u/0/explore/makewififast</a><br></div>
</div></div>