On Nov 24, 2017, at 12:21 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:

> Dave Taht <dave@taht.net> writes:
>
>> Pete Heist <peteheist@gmail.com> writes:
>>
>>> On Nov 23, 2017, at 10:44 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
>>>
>>> This is most likely an interaction of the AQM with Linux's scheduling
>>> latency.
>>>
>>> At the 'lan' setting, the time constants are similar in magnitude to
>>> the delays induced by Linux itself, so congestion might be signalled
>>> prematurely. The flows will then become sparse and total throughput
>>> reduced, leaving little or no back-pressure for the fairness logic to
>>> work against.
>>
>> Agreed.
>>
>> man page add:
>>
>> At the 'lan' setting (1ms), the time constants are similar in magnitude
>> to the jitter in the Linux kernel itself, so congestion might be
>> signalled prematurely. The flows will then become sparse and total
>> throughput reduced, leaving little or no back-pressure for the fairness
>> logic to work against. Use the 'metro' setting for local LANs unless
>> you have a custom kernel.
>
> Erm, doesn't this make the 'lan' keyword pretty much useless? So why not
> just remove it? Or redefine it to something that actually works? 3ms?

Removing the keywords altogether and going back to fq_codel's explicit target and interval parameters would be my personal preference (unless we can figure out how to make the keywords work well with one another in all cases).

If we want to keep them, I'll look for more cases where things might not behave as people expect, so we can further clear up the docs.

Also note that even at 'metro', fairness is better, but still not fully fair on my hardware:

https://docs.google.com/spreadsheets/d/1SMXWw2fLfmBRU622urfdvA_Ujsuf_KQ4P3uyOH1skOM/edit#gid=2072687073

With 950mbit rate limiting, fairness is pretty good at 1.16:1, but with bql alone it's still ~2:1. At what rtt will fairness actually work? That may depend on hardware, OS version, number of flows, whether bql is in use, or other factors.
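For reference, the two setups compare roughly like this (a sketch only; 'eth0' and the exact cake options are assumptions on my part, not the literal test commands):

  # shaped: cake limits to 950mbit itself, so the queue always builds
  # inside cake and it can exert its own back-pressure
  tc qdisc replace dev eth0 root cake bandwidth 950mbit metro

  # unshaped: cake runs at line rate and relies on BQL in the driver
  # for back-pressure; this is the case that stayed near 2:1 for me
  tc qdisc replace dev eth0 root cake unlimited metro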
At rtt 100ms, fairness worked well in this test with both bql and rate limiting.

Also note that I'm glossing over the fact that on my hardware (an Ethernet device with four queues), there was sometimes a bandwidth asymmetry, and a corresponding fluctuation in fairness, when using bql; it changed with each flent run, so I had to retry the test once or twice to get a "good" run. I'd want to quantify that separately before getting into it further, but it's something that may also throw folks for a loop.
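In case anyone wants to look for the same per-queue asymmetry, here's a minimal sketch for inspecting BQL state, again assuming the device is eth0 (the sysfs layout is standard for BQL-capable drivers):

  # one tx-N directory per hardware queue; 'limit' is the current BQL
  # limit and 'inflight' the bytes outstanding in the driver/NIC
  for q in /sys/class/net/eth0/queues/tx-*; do
    echo "$q: limit=$(cat $q/byte_queue_limits/limit)" \
         "inflight=$(cat $q/byte_queue_limits/inflight)"
  done

Persistently uneven inflight counts across the tx-N queues during a run would be one way to see the asymmetry directly.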