> On Feb 10, 2017, at 10:29 AM, Jonathan Morton <chromatix99@gmail.com> wrote:
>
>> On 10 Feb, 2017, at 11:21, Pete Heist <peteheist@gmail.com> wrote:
>>
>> Here are the results at various bitrates (all half-duplex rate limiting on this CPU).
>
> Hold on a minute. What does "half-duplex rate limiting" mean exactly?
>
>  - Jonathan Morton

Yes, that's a good point; I probably invented the phrase "half-duplex rate limiting". :) It means that both ingress and egress are redirected over the same IFB device and QoS'd together. This seems to suit the half-duplex nature of Wi-Fi better, because you can then use soft rate limiting to control the queue and keep latency low while still allowing almost the full one-directional throughput. You suggested trying this earlier on the Cake list, and it does work for me (see the sketch below).

By the way, if you want to see the qdisc setup, it's listed for each host under the qos_* sections on each page. The AP router is "mbp", which I use for half-duplex limiting; for full-duplex limiting it's done at both ends of the link, on "mini" and "mbp". The QoS setup script is here:

http://www.drhleny.cz/bufferbloat/qos.sh
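To illustrate the half-duplex idea, here's a simplified sketch (not an excerpt from qos.sh; the interface name and rate are placeholders):

    #!/bin/sh
    # "Half-duplex" shaping: redirect both directions of $IFACE to one
    # IFB device, so a single cake instance shapes the combined
    # up+down traffic at one shared rate.

    IFACE=eth0   # placeholder interface
    IFB=ifb0
    RATE=40mbit  # one rate shared by both directions

    modprobe ifb numifbs=1
    ip link set dev $IFB up

    # One cake instance on the IFB shapes everything redirected to it.
    tc qdisc add dev $IFB root cake bandwidth $RATE

    # Redirect ingress traffic to the IFB...
    tc qdisc add dev $IFACE handle ffff: ingress
    tc filter add dev $IFACE parent ffff: protocol all prio 1 \
        u32 match u32 0 0 action mirred egress redirect dev $IFB

    # ...and egress traffic to the same IFB, via a filter on a
    # trivial root qdisc. The kernel marks re-injected packets to
    # skip classification, so they don't loop back through the filter.
    tc qdisc add dev $IFACE root handle 1: prio
    tc filter add dev $IFACE parent 1: protocol all prio 1 \
        u32 match u32 0 0 action mirred egress redirect dev $IFB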
class=""></div><div class="">And “full-duplex limiting” for fq_codel:</div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_10mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_10mbit/index.html</a></div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_20mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_20mbit/index.html</a></div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_30mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_30mbit/index.html</a></div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_40mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_40mbit/index.html</a></div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_45mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/fq_codel_fd-eth-both_45mbit/index.html</a></div><div class=""><br class=""></div><div class="">But what I _do_ see now in the full-duplex limiting results is not throughput shifts, but occasional latency shifts for individual flows, like in the 30 Mbit full-duplex Cake result:</div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/cake_fd-eth-both_30mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/cake_fd-eth-both_30mbit/index.html</a></div><div class=""><br class=""></div><div class="">and when I was testing lowering Cake’s rtt parameter I saw it (perhaps unrelated to changing the rtt parameter), so here’s rtt 70ms, bandwidth 40 Mbit:</div><div class=""><br class=""></div><div class=""><a href="http://www.drhleny.cz/bufferbloat/cake_fd-eth-ap_70ms_40mbit/index.html" class="">http://www.drhleny.cz/bufferbloat/cake_fd-eth-ap_70ms_40mbit/index.html</a></div><div class=""><br class=""></div><div class="">I’m still parsing all of these results and haven’t figured everything out yet, so thanks for making me look at that again. It does appear that the throughput shifts may be related to rate limiting both egress and ingress over the same IFB device. However, I have not seen such throughput shifts for HTB+fq_codel when rate limited in the same way...</div><div class=""><br class=""></div><div class="">Pete</div><div class=""><br class=""></div></body></html>