Hi David,

On Jul 26, 2014, at 22:39 , David Lang <david@lang.hm> wrote:

> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
>
>> Hi David,
>>
>> On Jul 25, 2014, at 23:03 , David Lang <david@lang.hm> wrote:
>>
>>> On Fri, 25 Jul 2014 14:37:34 -0400, Valdis.Kletnieks@vt.edu wrote:
>>>
>>>> On Sat, 24 May 2014 10:02:53 -0400, "R." said:
>>>>
>>>>> Further, this function could be auto-scheduled or made enabled on
>>>>> router boot up.
>>>>
>>>> Yeah, if such a thing worked, it would be good.
>>>>
>>>> (Note in the following that a big part of my *JOB* is doing "What
>>>> could possibly go wrong?" analysis on mission-critical systems, which
>>>> tends to color my viewpoint on projects. I still think the basic
>>>> concept is good, just difficult to do, and am listing the obvious
>>>> challenges for anybody brave enough to tackle it... :)
>>>>
>>>>> I must be missing something important which prevents this. What is it?
>>>>
>>>> There are a few biggies. The first is what the linux-kernel calls
>>>> -ENOPATCH: nobody's written the code. The second is that you need an
>>>> upstream target someplace to test against. You need to deal with both
>>>> the "server is unavailable due to a backhoe incident two time zones
>>>> away" problem (which isn't *that* hard; just default to Something Not
>>>> Obviously Bad(TM)) and the "server is slashdotted" problem (which is
>>>> a bit harder to deal with). Remember that there are some really odd
>>>> corner cases to worry about - for instance, if there's a power
>>>> failure in a town, then when the electric company restores power
>>>> you're going to have every CeroWrt box hit the server within a few
>>>> seconds - all over the same uplink, most likely. No good data can
>>>> result from that... (Holy crap, it's been almost three decades since
>>>> I first saw a Sun 3/280 server tank because 12 Sun 3/50s all rebooted
>>>> over the network at once when building power was restored.)
>>>>
>>>> And if you're in Uzbekistan and the closest server netwise is at 60
>>>> Hudson, the analysis to compute the correct values becomes...
>>>> interesting.
>>>>
>>>> Dealing with non-obvious error conditions is also a challenge - a
>>>> router may only boot once every few months. And if you happen to be
>>>> booting just as a BGP routing flap is causing your traffic to take a
>>>> vastly suboptimal path, you may end up encoding a vastly inaccurate
>>>> setting and have it stuck there, causing suckage for non-obvious
>>>> reasons for the non-technical. So you really don't want to enable
>>>> auto-tuning unless you also have a good plan for auto-*RE*tuning....
>>>
>>> Have the router record its finding, and then repeat the test
>>> periodically, recording its finding as well. If the new finding is
>>> substantially different from the prior ones, schedule a retest soon
>>> (or fall back to the prior setting if it's bad enough); otherwise, if
>>> there aren't many samples yet, schedule a test soon, and if there are
>>> a lot of samples, schedule a test in a while.
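	For illustration, such a record-and-reschedule heuristic might look
roughly like the sketch below. Everything in it is made up for the sake
of the example - the thresholds, the intervals, and the function name
are placeholders, not an existing CeroWrt API:

#!/usr/bin/env python3
# Rough sketch of the record-and-reschedule heuristic described above.
# All thresholds and intervals are illustrative placeholder values.
import random
import statistics

SOON = 15 * 60          # retest in ~15 minutes (example value)
LATER = 7 * 24 * 3600   # retest in ~a week (example value)

def next_test_delay(new_sample, history, rel_tol=0.20):
    """Record new_sample (kbit/s) and return seconds until the next test."""
    if history:
        prior = statistics.median(history)
        if abs(new_sample - prior) / prior > rel_tol:
            # Substantially different from what we knew: retest soon,
            # and keep shaping with the prior (trusted) estimate for now.
            history.append(new_sample)
            return SOON * random.uniform(0.75, 1.25)
    history.append(new_sample)
    delay = SOON if len(history) < 5 else LATER
    # Random jitter so thousands of routers that rebooted together
    # (Valdis's power-outage scenario) don't all retest at the same time.
    return delay * random.uniform(0.75, 1.25)

The jitter at the end is there precisely because of the thundering-herd
scenario Valdis describes above.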
>>
>> Yeah, keeping some history to "predict" when to measure next sounds
>> clever.
>>
>>> However, I think the big question is how much tuning is required.
>>
>> I assume in most cases you need to measure the home router's bandwidth
>> rarely (on DSL, say, only after a re-sync with the DSLAM), but you
>> need to measure it early, because only then can you properly shape the
>> downlink. And we need to know the link's capacity to use traffic
>> shaping, so that BQL and fq_codel in the router have control over the
>> bottleneck queue... An equivalent of BQL and fq_codel running in the
>> DSLAM/CMTS and CPE obviously would be what we need, because then BQL
>> and fq_codel on the router would be all that is required. But that
>> does not seem likely to happen anytime soon, so we will still need to
>> work around the limitations of the equipment for a long time to come,
>> I fear.
>
> By how much tuning is required, I wasn't referring to how frequently to
> tune, but to how close default settings can come to the performance of
> an expertly tuned setup.

	Good question.

> Ideally the tuning takes into account the characteristics of the
> hardware of the link layer. If IP is encapsulated in something else
> (ATM, PPPoE, VPN, VLAN tagging, or Ethernet with jumbo packet support,
> for example), then you have overhead from the encapsulation that you
> would ideally take into account when tuning things.
>
> The question I'm talking about below is how much you lose compared to
> the ideal if you ignore this sort of thing and just assume that the
> wire is dumb and puts the bits on it as you send them. By dumb I mean:
> don't even allow for inter-packet gaps, don't measure the bandwidth,
> don't try to pace inbound connections by the timing of your acks, etc.
> Just run BQL and fq_codel, start the BQL sizes based on the wire speed
> of your link (Gig-E on the 3800), and shrink them based on long-term
> passive observation of the sender.
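	ATM is the worst case here, because its overhead is not a fixed
percentage: every packet is chopped into 53-byte cells carrying 48
bytes of payload each, and the last cell is padded out. A sketch of the
arithmetic (the 40 bytes of per-packet overhead are just a plausible
example; the real value depends on the exact PPPoA/PPPoE/LLC
encapsulation):

#!/usr/bin/env python3
# How many bytes a packet really occupies on an ADSL/ATM link.
# The 40-byte per-packet overhead is an assumed example value.
import math

ATM_CELL_PAYLOAD = 48  # usable bytes per 53-byte ATM cell

def atm_wire_size(packet_len, per_packet_overhead=40):
    cells = math.ceil((packet_len + per_packet_overhead) / ATM_CELL_PAYLOAD)
    return cells * 53

for size in (64, 128, 576, 1500):
    wire = atm_wire_size(size)
    print(f"{size:5d} B packet -> {wire:5d} B on the wire "
          f"(+{100 * (wire - size) / size:.0f}%)")

Under these assumptions the inflation ranges from ~17% for a full-size
packet to well over 100% for small packets, which is why a flat rate
reduction can never quite emulate the cell framing.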
	As data talks, I just did a quick experiment with my ADSL2+ line at
home. The solid lines in the attached plot show the results for proper
shaping with SQM (shaping to 95% of the link rates of downstream and
upstream, while taking the link-layer properties, i.e. ATM encapsulation
and per-packet overhead, into account); the broken lines show the same
system with just the link-layer and per-packet overhead adjustments
disabled, but still shaping to 95% of link rate (this is roughly
equivalent to a 15% underestimation of the packet size). The actual test
is netperf-wrapper's RRUL (4 TCP streams up, 4 TCP streams down, while
measuring latency with ping and UDP probes). As you can see from the
plot, just getting the link-layer encapsulation wrong destroys latency
under load badly. The host is ~52ms RTT away, and with fq_codel the ping
time per leg only increases by one codel target of 5ms each, resulting
in a modest latency increase of ~10ms with proper shaping, for a total
of ~65ms. With improper shaping, RTTs increase to ~95ms (they almost
double), so RTT increases by ~43ms. Also note how the extremes for the
broken lines are much worse than for the solid lines. In short, I would
estimate that a slight misjudgment of the packet size (15%) results in
an almost 80% increase in latency under load (~52ms -> ~95ms). In other
words, getting the rates right matters a lot. (I should also note that
in my setup there is a secondary router that limits RTT to a maximum of
300ms, otherwise the broken lines might look even worse...)

[Attached plot: RRUL results on the ADSL2+ link - latency under load
with proper link-layer shaping (solid lines) versus shaping without
link-layer/overhead adjustments (broken lines).]

> If you end up only losing 5-10% of your overall network performance by
> ignoring the details of the wire, then we should ignore them by
> default.
>
> If, however, not measuring anything first results in significantly
> worse performance than a tuned setup, then we need to figure out how
> to do the measurements needed for tuning.

	Agreed.

> Some people seem to have fallen into the "perfect is the enemy of good
> enough" trap on this topic. They are so fixated on getting the
> absolute best performance out of a link that they are forgetting how
> bad the status quo is right now.
>
> If you look at the graph that Dave Taht put on page 6 of his slide deck
> http://snapon.lab.bufferbloat.net/~d/Presos/CaseForComprehensiveQueueManagement/assets/player/KeynoteDHTMLPlayer.html#5
> it's important to realize that even the worst of the BQL+fq_codel
> graphs is worlds better than the default setting. While it would be
> nice to get to the green trace on the left, even getting to the middle
> traces instead of the black trace on the right would be a huge win for
> the public.

	Just to note: in the plot above, the connection to the DSL modem was
always mediated by fq_codel and BQL, and since shaping was used, BQL
would not come into effect...

Best Regards
	Sebastian

> David Lang
>
>>> If a connection with BQL and fq_codel is 90% as good as a tuned
>>> setup, default to untuned unless the user explicitly hits a button
>>> to measure (and then a second button to accept the measurement).
>>>
>>> If BQL and fq_codel by default are ~70% as good as a tuned setup,
>>> there's more room to argue that all setups must be tuned, but then
>>> the question is how they fare against an old, non-BQL, non-fq_codel
>>> setup; if they are considerably better, it may still be worthwhile.
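P.S.: For reference, the rate side of what SQM does above boils down to
something like the following sketch; the sync rates and the 40-byte
overhead are placeholder examples, not my actual line's values:

#!/usr/bin/env python3
# Sketch: derive ingress/egress shaper rates the way SQM does,
# i.e. a fixed percentage of the modem's sync rates. The sync
# rates below are hypothetical ADSL2+ values, not my real line.

def shaper_rates(down_sync_kbps, up_sync_kbps, percent=95):
    """Shape to slightly below sync rate so the router's queue,
    not the modem's or DSLAM's, is the bottleneck."""
    return (down_sync_kbps * percent // 100,
            up_sync_kbps * percent // 100)

down, up = shaper_rates(16000, 2500)  # assumed 16000/2500 kbit/s sync
print(f"shape ingress to {down} kbit/s, egress to {up} kbit/s")
# On an ATM link the shaper must additionally size each packet with
# the cell arithmetic sketched earlier; otherwise these rates are
# effective overestimates and latency balloons, as the plot shows.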