<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 13, 2016 at 11:05 AM, Bob McMahon <span dir="ltr"><<a href="mailto:bob.mcmahon@broadcom.com" target="_blank">bob.mcmahon@broadcom.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">The graphs are histograms of mpdus per ampdu, from 1 to 64. The blue spikes show that the vast majority of traffic is filling an ampdu with 64 mpdus; the fill stop reason is "ampdu full". The purple fill stop reason is that the sw fifo (above the driver) went empty, indicating a too-small CWND for maximum aggregation. </div></blockquote><div><br></div><div>Can I get you to drive this plot, with the wifi rate limited to 6, 20, and 300 Mbit/s plus "native", using flent's tcp_upload and rtt_fair_up tests?</div><div><br></div><div>My goal is far different from getting a single tcp flow to max speed; it is to get close to full throughput with multiple flows while not accumulating 2 sec of buffering...</div><div><br></div><div><a href="http://blog.cerowrt.org/post/rtt_fair_on_wifi/">http://blog.cerowrt.org/post/rtt_fair_on_wifi/</a></div><div><br></div><div>Or even 100ms of it:</div><div><br></div><div><a href="http://blog.cerowrt.org/post/ath10_ath9k_1/">http://blog.cerowrt.org/post/ath10_ath9k_1/</a><br></div><div><br></div><div><br></div><div>Early experiments with getting a good rate estimate "to fill the queue" from rate control info were basically successful. Lacking rate control and using dql only, ramp-up currently takes much longer at the higher rates, but works well at the lower ones.<br></div><div><br></div><div><a href="http://blog.cerowrt.org/post/dql_on_wifi_2/">http://blog.cerowrt.org/post/dql_on_wifi_2/</a><br></div><div><div><br class=""><a href="http://blog.cerowrt.org/post/dql_on_wifi">http://blog.cerowrt.org/post/dql_on_wifi</a></div><div><br></div></div><div> 
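Bob's per-ampdu histogram is easy to reproduce from any driver trace that logs how many mpdus went into each ampdu. The trace contents and names below are hypothetical; a minimal sketch:

```python
from collections import Counter

def ampdu_histogram(mpdus_per_ampdu):
    """Bucket a trace of per-ampdu mpdu counts (1..64) into a 64-bin histogram."""
    counts = Counter(mpdus_per_ampdu)
    return [counts.get(n, 0) for n in range(1, 65)]

# Hypothetical trace: a CWND-unconstrained sender piles up in the 64 bin,
# while a starved one (sw fifo empty) spikes at small aggregate sizes.
trace = [64] * 900 + [12] * 60 + [1] * 40
hist = ampdu_histogram(trace)
```

Plotting `hist` against bins 1..64 gives the blue-versus-purple comparison described above.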
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">A driver wants to aggregate to the fullest extent possible. </div></blockquote><div><br></div><div>While still retaining tcp congestion control. There are other nuances, like being nice about total airtime to others sharing the medium, minimizing retries due to an overlarge ampdu for the current BER, etc.</div><div><br></div><div>I don't remember what section of the 802.11-2012 standard this is from, but:</div><div><div><br></div><div>```</div><div>Another unresolved issue is how large a concatenation threshold the devices should set. Ideally, the maximum value is preferable but in a noisy environment, short frame lengths are preferred because of potential retransmissions. The A-MPDU concatenation scheme operates only over the packets that are already buffered in the transmission queue, and thus, if the CPR data rate is low, then efficiency also will be small. There are many ongoing studies on alternative queuing mechanisms different from the standard FIFO. *A combination of frame aggregation and an enhanced queuing algorithm could increase channel efficiency further*.</div><div>```</div></div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"> A workaround is to set initcwnd in the routing table.</div></blockquote><div><br></div><div>Ugh. Um... no... initcwnd 10 is already too large for many networks. If you set your wifi initcwnd to something like 64, what happens to the 5 Mbit cable uplink just upstream from that?</div><div><br></div><div>There are a couple of other parameters that might be of use: tcp_send_lowat and tcp_limit_output_bytes. These were originally set too low for wifi. 
A good setting for the latter, for ethernet, was about 4096. Then the wifi folk complained, and it got bumped to 64k; I think it's at 256k now, to make the xen folk happier.</div><div><br></div><div>These are all workarounds for the real problem, which was not tuning driver queueing to the actually achievable ampdu, and not doing fq+aqm to spread the load (essentially "pacing" bursts), which is what is happening in michal's patches.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>I don't have the data available for multiple flows at the moment. </div></div></blockquote><div><br></div><div>The world is full of folk trying to make single tcp flows go at maximum speed, with multiple alternatives to cubic. This quest has resulted in the near elimination of the sawtooth along the edge, horrific overbuffering, a net loss in speed, and a huge perception of "slowness".</div><div><br></div><div>Note: I have long figured that a different tcp should be used on wifi uplinks, after we fixed a ton of basic mis-assumptions. 
TCPs should also become more wifi/wireless aware, but tweaking initcwnd, tcp_limit_output_bytes, etc., is not the right thing.</div><div><br></div><div>There has been some good tcp research published of late; look into "BBR" and "CDG".</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Note: That will depend on what exactly defines a flow.</div><span class=""><font color="#888888"><div><br></div><div>Bob </div></font></span></div><div class=""><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, May 13, 2016 at 10:49 AM, Dave Taht <span dir="ltr"><<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">I try to stress that, for the sawtooth to function properly, single tcp flows should never use all the bandwidth.<div><br></div><div>What happens when you hit it with 4 flows? Or 12?</div><div><div class="gmail_extra"><br></div><div class="gmail_extra">Nice graph, but I don't understand the single blue spikes.</div><div class="gmail_extra"><div><div><br><div class="gmail_quote">On Fri, May 13, 2016 at 10:46 AM, Bob McMahon <span dir="ltr"><<a href="mailto:bob.mcmahon@broadcom.com" target="_blank">bob.mcmahon@broadcom.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">On driver delays: from a driver development perspective, the problem isn't whether to add delay (it shouldn't); it's that the TCP stack isn't presenting sufficient data to fully utilize aggregation. 
Below is a histogram comparing the aggregation of 3 systems (units are mpdus per ampdu). The lowest-latency stack is in purple, and it also has the worst average throughput. From a driver perspective, one would like TCP to present sufficient bytes into the pipe that the histogram leans toward the blue. <div><br><div><img src="cid:ii_154ab2f3fa1213b7" alt="Inline image 1" width="553" height="415"><br></div><div>I'm not an expert on TCP congestion avoidance, but maybe the algorithm could benefit from weighting RTT by CWND (or bytes in flight) and hunting for that maximum? </div><span><font color="#888888"><div><br></div><div>Bob</div></font></span></div></div><div class="gmail_extra"><br><div class="gmail_quote"><span>On Mon, May 9, 2016 at 8:41 PM, David Lang <span dir="ltr"><<a href="mailto:david@lang.hm" target="_blank">david@lang.hm</a>></span> wrote:<br></span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div><div><span>On Mon, 9 May 2016, Dave Taht wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
On Mon, May 9, 2016 at 7:25 PM, Jonathan Morton <<a href="mailto:chromatix99@gmail.com" target="_blank">chromatix99@gmail.com</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">
On 9 May, 2016, at 18:35, Dave Taht <<a href="mailto:dave.taht@gmail.com" target="_blank">dave.taht@gmail.com</a>> wrote:<br>
<br>
should we always wait a little bit to see if we can form an aggregate?<br>
</blockquote>
<br>
I thought the consensus on this front was “no”, as long as we’re making the decision when we have an immediate transmit opportunity.<br>
</blockquote>
<br>
I think it is more nuanced than how David Lang has presented it.<br>
</blockquote>
<br></span>
I have four reasons for arguing for no speculative delays.<br>
<br>
1. airtime that isn't used can't be saved.<br>
<br>
2. lower best-case latency<br>
<br>
3. simpler code<br>
<br>
4. clean and gradual service degradation under load.<br>
<br>
the arguments against are:<br>
<br>
5. throughput per ms of transmit time is better if aggregation happens than if it doesn't.<br>
<br>
6. if you don't transmit, some other station may choose to before you would have finished.<br>
<br>
#2 is obvious, but with the caveat that anytime you transmit you may be delaying someone else.<br>
<br>
#1 and #6 are flip sides of each other. we want _someone_ to use the airtime, the question is who.<br>
<br>
#3 and #4 are closely related.<br>
<br>
If you follow my approach (transmit immediately if you can, aggregate when you have a queue), the code really has one mode (plus queuing): "If you have a Transmit Opportunity, transmit up to X packets from the queue", and it doesn't matter if it's only one packet.<br>
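That one-mode policy sketches down to a few lines. This is an illustrative model under assumed names (`MAX_AGG` standing in for "X"), not real driver code:

```python
from collections import deque

MAX_AGG = 64  # "X": the cap on packets pulled per transmit opportunity

def on_txop(queue):
    """One mode only: on a transmit opportunity, pull up to MAX_AGG packets
    from the queue and send them as a single aggregate. A one-packet
    aggregate is fine; aggregation emerges whenever a backlog exists."""
    burst = []
    while queue and len(burst) < MAX_AGG:
        burst.append(queue.popleft())
    return burst

q = deque(range(100))        # a backlog has formed
first = on_txop(q)           # 64 packets ride one txop
second = on_txop(q)          # the remaining 36
```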
<br>
If you delay the first packet to give yourself a chance to aggregate it with others, you add in the complexity and overhead of timers (including cancelling timers, slippage in timers, etc.), and you add a "first packet, start timer" mode to deal with.<br>
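For contrast, the speculative-delay variant needs that extra mode and the timer machinery. A hypothetical sketch (the locking a real driver would also need against enqueue/flush races is exactly the complexity being argued against):

```python
import threading
from collections import deque

AGG_WAIT_S = 0.003  # assumed speculative wait, hoping more packets arrive

class DelayedAggregator:
    """The extra "first packet, start timer" mode: the first enqueue arms a
    timer, later packets may ride along, and the timer must eventually fire
    (or be cancelled), and can slip under load."""
    def __init__(self, send):
        self.queue = deque()
        self.send = send
        self.timer = None

    def enqueue(self, pkt):
        self.queue.append(pkt)
        if self.timer is None:          # first packet: start the timer
            self.timer = threading.Timer(AGG_WAIT_S, self.flush)
            self.timer.start()

    def flush(self):
        burst, self.queue = list(self.queue), deque()
        self.timer = None
        if burst:
            self.send(burst)
```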
<br>
I grant you that the first approach will "saturate" the airtime at lower traffic levels, but at that point all the stations will start aggregating the minimum amount needed to keep the air saturated, while still minimizing latency.<br>
<br>
I then expect that application-related optimizations would further complicate the second approach. There are just too many cases where small amounts of data have to be sent and other things serialize behind them.<br>
<br>
A DNS lookup to find a domain, then a 3-way handshake, then a request to see if the <web something> library has been updated since it was last cached (repeated for several libraries), then fetching the actual page content. All of these steps up to the actual page content could be single packets that have to be sent (and responded to with a single packet), each waiting for the prior one to complete. If you add a few ms to each of these, you can easily hit 100ms in added latency. Once you start trying to special-case these sorts of things, the code complexity multiplies.<br>
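The arithmetic behind that 100ms is worth making explicit. The numbers below are assumptions for illustration: seven serialized single-packet exchanges, a few ms of speculative delay per direction.

```python
def added_latency_ms(serialized_exchanges, per_direction_delay_ms):
    """Each exchange is one small packet out and one back; both directions
    eat the speculative aggregation delay, and the exchanges serialize."""
    return 2 * per_direction_delay_ms * serialized_exchanges

# DNS lookup, 3-way handshake, a few cache-validation requests, page request:
total = added_latency_ms(7, 7)   # 98 ms added before any page content moves
```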
<br>
So I believe that the KISS approach ends up with a 'worse is better' situation.<span><font color="#888888"><br>
<br>
David Lang</font></span><br></div></div><span>_______________________________________________<br>
Make-wifi-fast mailing list<br>
<a href="mailto:Make-wifi-fast@lists.bufferbloat.net" target="_blank">Make-wifi-fast@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/make-wifi-fast" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/make-wifi-fast</a><br>
<br></span></blockquote></div><br></div>
</blockquote></div><br><br clear="all"><div><br></div></div></div><span>-- <br><div>Dave Täht<br>Let's go make home routers and wifi faster! With better software!<br><a href="http://blog.cerowrt.org" target="_blank">http://blog.cerowrt.org</a></div>
</span></div></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Dave Täht<br>Let's go make home routers and wifi faster! With better software!<br><a href="http://blog.cerowrt.org" target="_blank">http://blog.cerowrt.org</a></div>
</div></div>