<div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>
> On the 3800, it never meets the rate, but it's only off by maybe 5%.<br>
<br>
</span> As Jonathan pointed out already, this is in the range of the difference between raw rates and TCP goodput, so nothing to write home about ;)<br></blockquote><div><br></div><div>Yeah, I'm not too worried about that 5%, based on that explanation.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<span><br>
> But on my new WRT1900AC, it's wildly off, even over the same performance range (I tested rates from 80-220Mbps in 20Mbps jumps, and saw 40-150Mbps).<br>
<br>
</span> So you started with the WRT1900AC where the wndr3800 dropped off? I wonder maybe the Belkin is also almost linear for the lower range?</blockquote><div><br></div><div>Yeah, good point on a methodology fail. I'll run another series of tests walking up the same series of rate limits and see what I get.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"> I also note we adjust the quantum based on the rates:<br>
from functions.sh:<br>
get_mtu() {<br></blockquote><div>... snip </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">}<br>
<br>
which we use in the htb invocations via this indirection:<br>
LQ="quantum `get_mtu $IFACE $CEIL`"<br>
<br></blockquote><div><br></div><div>That is odd, and that's quite the aggressive curve on quantum, doubling every 10-20Mbps. </div><div><br></div><div>I did some math, and plotted out the quantum vs. bandwidth based on that snippet of code (and assuming a 1500-byte MTU):</div><div><br></div><div><br><img src="cid:ii_iahb3yb21_14dbb78be4ff0cbf" width="407" height="256"><br>And then plotted out the corresponding time in ms that each quantum's worth of bytes (it is bytes, right?) spends on the wire:<br></div><div><br></div><div><img src="cid:ii_iahb5j3j2_14dbb79dca182cd3" width="382" height="256"><br>Which I think is a really interesting plot (and here are the points that line up with the steps in the script):<br></div><div><br></div><div>kbps = quantum = time</div><div><div>20000 = 3000 = 1.2ms</div><div>30000 = 6000 = 1.6ms</div><div>40000 = 12000 = 2.4ms</div><div>50000 = 24000 = 3.84ms</div><div>60000 = 48000 = 6.4ms</div><div>80000 = 96000 = 9.6ms</div></div><div><br></div><div>So it appears that the goal of these values was to keep increasing the quantum as rates went up, to provide more bytes per operation, but that's going to risk adding latency as the time-per-quantum crosses the delay target in fq_codel (if I'm understanding this correctly).</div><div><br></div><div>So one thing that I can do is play around with this, and see if I can keep that quantum time at a constant level (i.e., 10ms, which seems _awfully_ long), or continue increasing it (which seems like a bad idea). I'd love to hear from whoever put this in as to what its goal was (or was it just empirically tuned?)</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>><br>
> I have no idea where to start looking for the cause. But for now, I'm just setting my ingress rate MUCH higher than I should, because it's working out to the right value as a result.<br>
<br>
</span> It would be great to understand why we need to massively under-shape in that situation to get decent shaping and decent latency under load.<br></blockquote><div><br></div><div>Agreed.</div><div><br></div><div>-Aaron </div></div></div></div>
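P.S. For anyone who wants to check my quantum-vs-time numbers above, here's a small sketch of the arithmetic. The (rate, quantum) pairs are the step points from my table; the actual get_mtu() body was snipped, so this only reproduces the time-on-wire math, not the script itself:

```shell
#!/bin/sh
# Time (in ms) that a quantum of Q bytes occupies the wire at RATE kbps.
# Q bytes = Q*8 bits; RATE kbps = RATE bits per ms, so time_ms = Q*8/RATE.
quantum_ms() {
    # $1 = rate in kbps, $2 = quantum in bytes
    awk -v r="$1" -v q="$2" 'BEGIN { printf "%g\n", q * 8 / r }'
}

# Step points taken from the table in the email above.
for pair in 20000:3000 30000:6000 40000:12000 50000:24000 60000:48000 80000:96000; do
    rate=${pair%%:*}
    q=${pair##*:}
    # e.g. "20000 kbps, quantum 3000 bytes -> 1.2 ms"
    echo "$rate kbps, quantum $q bytes -> $(quantum_ms "$rate" "$q") ms"
done
```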