<div dir="ltr"><div class="gmail_default" style="font-size:small"><br></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Jun 25, 2018 at 6:38 AM Toke Høiland-Jørgensen <<a href="mailto:toke@toke.dk">toke@toke.dk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Michael Richardson <<a href="mailto:mcr@sandelman.ca" target="_blank">mcr@sandelman.ca</a>> writes:<br>
<br>
> Jonathan Morton <<a href="mailto:chromatix99@gmail.com" target="_blank">chromatix99@gmail.com</a>> wrote:<br>
> >>> I would instead frame the problem as "how can we get hardware to<br>
> >>> incorporate extra packets, which arrive between the request and grant<br>
> >>> phases of the MAC, into the same TXOP?" Then we no longer need to<br>
> >>> think probabilistically, or induce unnecessary delay in the case that<br>
> >>> no further packets arrive.<br>
> >><br>
> >> I've never looked at the ring/buffer/descriptor structure of the ath9k, but<br>
> >> with most ethernet devices, they would just continue reading descriptors<br>
> >> until it was empty. Is there some reason that something similar can not<br>
> >> occur?<br>
> >><br>
> >> Or is the problem at a higher level?<br>
> >> Or is that we don't want to enqueue packets so early, because it's a source<br>
> >> of bloat?<br>
><br>
> > The question is of when the aggregate frame is constructed and<br>
> > "frozen", using only the packets in the queue at that instant. When<br>
> > the MAC grant occurs, transmission must begin immediately, so most<br>
> > hardware prepares the frame in advance of that moment - but how far in<br>
> > advance?<br>
><br>
> Oh, I understand now. The aggregate frame has to be constructed, and it's<br>
> this frame that is actually in the xmit queue. I'm guessing that it's in the<br>
> hardware, because if it was in the driver, then we could perhaps do<br>
> something?<br>
<br>
No, it's in the driver for ath9k. So it would be possible to delay it<br>
slightly to try to build a larger one. The timing constraints are too<br>
tight to do it reactively when the request is granted, though; so<br>
delaying would result in idleness if there are no other flows to queue<br>
before then...<br>
<br>
Even for devices that build aggregates in firmware or hardware (as all<br>
AC chipsets do), it might be possible to throttle the queues at higher<br>
levels to try to get better batching. It's just not obvious that there's<br>
an algorithm that can do this in a way that will "do no harm" for other<br>
types of traffic, for instance...<br>
<br><br></blockquote><div><div class="gmail_default" style="font-size:small;display:inline"></div><div class="gmail_default" style="font-size:small;display:inline"></div><div class="gmail_default" style="font-size:small;display:inline">Isn't this sort of delay a natural consequence of a busy channel?</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div><div><div class="gmail_default" style="font-size:small;display:inline">What matters is not conserving txops *all the time*, but only when the channel is busy and there aren't more txops available....</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div><div><div class="gmail_default" style="font-size:small;display:inline">So when you are trying to transmit on a busy channel, that contention time will naturally increase, since you won't</div></div><div><div class="gmail_default" style="font-size:small;display:inline">be able to get a transmit opportunity immediately. 
So you should queue up more packets into an aggregate in that case.</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div><div><div class="gmail_default" style="font-size:small;display:inline">We only care about conserving txops when they are scarce, not when they are abundant.</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div><div><div class="gmail_default" style="font-size:small;display:inline">This principle is why a window system as crazy as X11 is competitive: it naturally becomes more efficient in the</div></div><div><div class="gmail_default" style="font-size:small;display:inline">face of load (more and more requests batch up and are handled in bulk), so the system is at maximum</div></div><div><div class="gmail_default" style="font-size:small;display:inline">efficiency at full load.</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div><div><div class="gmail_default" style="font-size:small;display:inline">Or am I missing something here?</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div><div><div class="gmail_default" style="font-size:small;display:inline">Jim</div></div><div><div class="gmail_default" style="font-size:small;display:inline"><br></div></div></div></div>
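<div class="gmail_default" style="font-size:small">[Editor's note: the natural-batching idea above — add no delay on an idle channel, but keep folding newly arrived packets into the pending aggregate while contending for a busy one — can be sketched as a toy simulation. This is a hypothetical illustration, not ath9k driver code; the event model, the <code>build_aggregates</code> function, and the <code>MAX_AGG</code> limit are all invented for the example, and the aggregate is "frozen" only at grant time, as the thread discusses.]</div>

```python
# Toy sketch of opportunistic aggregation (NOT real ath9k/mac80211 code).
# Packets queue up while we wait for a transmit opportunity; the aggregate
# is only frozen when the TXOP grant actually arrives, so a busy channel
# (rare grants) naturally yields larger aggregates with no added delay
# on an idle channel (frequent grants).
from collections import deque

MAX_AGG = 32  # assumed per-aggregate packet limit (hypothetical)

def build_aggregates(events):
    """events: time-ordered list of ("arrive", pkt) or ("grant",) tuples.
    Returns the list of aggregates transmitted, one per grant with data."""
    queue = deque()
    sent = []
    for ev in events:
        if ev[0] == "arrive":
            queue.append(ev[1])          # packet waits in the soft queue
        else:                            # TXOP granted: freeze aggregate now
            agg = []
            while queue and len(agg) < MAX_AGG:
                agg.append(queue.popleft())
            if agg:
                sent.append(agg)
    return sent
```

On an idle channel every packet is granted almost immediately and goes out alone; on a busy channel more arrivals pile up between grants, so each txop carries more packets — matching the point that txops only need conserving when they are scarce.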