Also, I haven't done it, but I don't think rate-limiting TCP will solve this aggregation "problem." The faster RTT is driving CWND much below the maximum aggregation, i.e. CWND is too small relative to Wi-Fi aggregation.

Bob

On Fri, May 13, 2016 at 11:05 AM, Bob McMahon <bob.mcmahon@broadcom.com> wrote:

The graphs are histograms of MPDUs per A-MPDU, from 1 to 64. The blue spikes show that the vast majority of traffic is filling an A-MPDU with all 64 MPDUs; there the fill-stop reason is "A-MPDU full." The purple fill-stop reason is that the software FIFO (above the driver) went empty, indicating a CWND too small for maximum aggregation. A driver wants to aggregate to the fullest extent possible. A workaround is to set initcwnd in the routing table (a sketch appears at the end of this thread).

I don't have the data available for multiple flows at the moment. Note: that will depend on what exactly defines a flow.

Bob

On Fri, May 13, 2016 at 10:49 AM, Dave Taht <dave.taht@gmail.com> wrote:

I try to stress that single TCP flows should never use all the bandwidth, so that the sawtooth can function properly.

What happens when you hit it with 4 flows? or 12?

Nice graph, but I don't understand the single blue spikes?

On Fri, May 13, 2016 at 10:46 AM, Bob McMahon <bob.mcmahon@broadcom.com> wrote:

On driver delays: from a driver development perspective, the problem isn't whether to add delay (it shouldn't); it's that the TCP stack isn't presenting sufficient data to fully utilize aggregation. Below is a histogram comparing the aggregation of three systems (units are MPDUs per A-MPDU). The lowest-latency stack is in purple, and it also has the worst average throughput. From a driver perspective, one would like TCP to present sufficient bytes into the pipe that the histogram leans toward the blue.

[Inline image 1: histogram of MPDUs per A-MPDU for the three systems]

I'm not an expert on TCP congestion avoidance, but maybe the algorithm could benefit from RTT weighted by CWND (or bytes in flight) and hunt for that maximum?

Bob

On Mon, May 9, 2016 at 8:41 PM, David Lang <david@lang.hm> wrote:

On Mon, 9 May 2016, Dave Taht wrote:

> On Mon, May 9, 2016 at 7:25 PM, Jonathan Morton <chromatix99@gmail.com> wrote:
>> On 9 May, 2016, at 18:35, Dave Taht <dave.taht@gmail.com> wrote:
>>> should we always wait a little bit to see if we can form an aggregate?
>>
>> I thought the consensus on this front was "no", as long as we're making
>> the decision when we have an immediate transmit opportunity.
>
> I think it is more nuanced than how David Lang has presented it.

I have four reasons for arguing for no speculative delays.

1. Airtime that isn't used can't be saved.

2. Lower best-case latency.

3. Simpler code.

4. Clean and gradual service degradation under load.

The arguments against are:

5. Throughput per ms of transmit time is better if aggregation happens than if it doesn't.

6. If you don't transmit, some other station may choose to before you would have finished.

#2 is obvious, but with the caveat that anytime you transmit you may be delaying someone else.

#1 and #6 are flip sides of each other: we want _someone_ to use the airtime, the question is who.

#3 and #4 are closely related.

If you follow my approach (transmit immediately if you can, aggregate when you have a queue), the code really has one mode (plus queuing): "if you have a Transmit Opportunity, transmit up to X packets from the queue," and it doesn't matter if it's only one packet.
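
In rough C, the one-mode version amounts to this (a sketch only: txq_pop() and txop_send() are hypothetical driver helpers, not any real driver's API):

    #define MAX_AMPDU 64                        /* aggregation cap, per the histograms above */

    struct pkt;                                 /* opaque packet */
    struct txq;                                 /* software FIFO above the driver */
    struct pkt *txq_pop(struct txq *q);         /* dequeue; NULL when empty */
    void txop_send(struct pkt **pkts, int n);   /* transmit n MPDUs as one A-MPDU */

    /* Called when we win a transmit opportunity: send whatever is
     * already queued, even if that is a single packet. No timers. */
    void on_txop(struct txq *q)
    {
        struct pkt *batch[MAX_AMPDU];
        int n = 0;

        while (n < MAX_AMPDU && (batch[n] = txq_pop(q)) != NULL)
            n++;
        if (n > 0)
            txop_send(batch, n);
    }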

If you delay the first packet to give yourself a chance to aggregate it with others, you add the complexity and overhead of timers (including cancelling timers, slippage in timers, etc.) and a "first packet, start timer" mode to deal with.
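
The delayed version, for comparison, needs roughly this much extra machinery (again hypothetical, reusing struct txq from the sketch above; timer_arm() and the delay constant are made up for illustration):

    #define AGGR_DELAY_US 500                   /* illustrative speculative delay */

    struct timer;                               /* some timer facility */
    void timer_arm(struct timer *t, int usecs); /* fire a callback after usecs */

    struct txstate {
        struct txq   *q;
        struct timer *aggr_timer;
        int           waiting;                  /* the extra "timer armed" mode */
    };

    void on_enqueue(struct txstate *s)
    {
        if (!s->waiting) {                      /* first packet: don't send yet */
            s->waiting = 1;
            timer_arm(s->aggr_timer, AGGR_DELAY_US);
        }
        /* still needed, not shown: cancelling the timer, handling timer
         * slippage, and racing with a TXOP that arrives before it fires */
    }

    void on_aggr_timeout(struct txstate *s)
    {
        s->waiting = 0;
        /* only now contend for the medium and transmit the batch */
    }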

I grant you that the first approach will "saturate" the airtime at lower traffic levels, but at that point all the stations will start aggregating the minimum amount needed to keep the air saturated, while still minimizing latency.

I also expect that application-related optimizations would further complicate the second approach. There are just too many cases where small amounts of data have to be sent and other things serialize behind them.

A DNS lookup to find the domain, then a 3-way handshake, then a request to check whether the <web something> library has been updated since it was last cached (repeated for several libraries), then fetching the actual page content. Everything up to the actual page content could be single packets that have to be sent (and responded to with a single packet), each waiting for the prior one to complete. If you add a few ms to each of these, you can easily hit 100 ms of added latency. Once you start trying to special-case these sorts of things, the code complexity multiplies.
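
To put a number on it (purely illustrative): with, say, 5 ms of speculative delay per transmission and ten such serialized single-packet exchanges, each delayed in both directions, that is 10 x 2 x 5 ms = 100 ms of added latency before the first byte of actual page content moves.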

So I believe that the KISS approach ends up being a "worse is better" situation.

David Lang

--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
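
A sketch of the initcwnd workaround Bob mentions above, using iproute2 on a Linux endpoint (the gateway, device, and window value are illustrative). For scale: a full 64-MPDU A-MPDU of ~1500-byte frames is roughly 96 KB, while a common default of initcwnd 10 puts only ~14.6 KB (10 x MSS 1460) in flight at connection start.

    # show the current default route, then raise its initial congestion window
    ip route show default
    ip route change default via 192.168.1.1 dev eth0 initcwnd 16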