[Bloat] lwn.net's tcp small queues vs wifi aggregation solved
Sebastian Moeller
moeller0 at gmx.de
Thu Jun 21 15:41:26 EDT 2018
Hi All,
> On Jun 21, 2018, at 21:17, Dave Taht <dave.taht at gmail.com> wrote:
>
> On Thu, Jun 21, 2018 at 9:43 AM, Kathleen Nichols <nichols at pollere.com> wrote:
>> On 6/21/18 8:18 AM, Dave Taht wrote:
>>
>>> This is a case where inserting a teeny bit more latency to fill up the
>>> queue (ugh!), or a driver having some way to ask the probability of
>>> seeing more data in the
>>> next 10us, or... something like that, could help.
>>>
>>
>> Well, if the driver sees the arriving packets, it could infer that an
>> ack will be produced shortly and will need a sending opportunity.
>
> Certainly in the case of wifi and lte and other simplex technologies
> this seems feasible...
>
> 'cept that we're all busy finding ways to do ack compression this
> month and thus the
> two big tcp packets = 1 ack rule is going away. Still, an estimate,
> with a short timeout
> might help.
That short timeout seems essential: just because a link is wireless does not mean the ACKs for passing TCP packets will appear shortly, as who knows what routing happens after the wireless link (think city-wide mesh network). In a way, such a solution should first figure out whether waiting has any chance of being useful at all, by looking at the typical delay between data packets and their matching ACKs.
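To make that concrete, here is a minimal sketch of what such a check could look like, assuming hypothetical driver hooks that time the gap between a forwarded TCP data packet and its matching returning ACK (the names and the 10us budget are mine for illustration, not from any real driver):

#include <stdbool.h>
#include <stdint.h>

#define ACK_WAIT_MAX_US 10      /* the short timeout: never wait longer */

struct ack_delay_est {
    uint32_t srtt_us;           /* EWMA of observed data->ACK gap, in us */
};

/* Feed in the measured gap between forwarding a TCP data packet and
 * seeing its matching ACK come back (EWMA gain 1/8, like TCP's srtt). */
static void ack_delay_update(struct ack_delay_est *e, uint32_t gap_us)
{
    if (!e->srtt_us)
        e->srtt_us = gap_us;
    else
        e->srtt_us += ((int32_t)gap_us - (int32_t)e->srtt_us) / 8;
}

/* Hold a tx opportunity open only if ACKs typically return within the
 * timeout budget; a city-wide mesh with tens of ms of return-path delay
 * fails this test, and we transmit immediately instead. */
static bool worth_waiting(const struct ack_delay_est *e)
{
    return e->srtt_us && e->srtt_us <= ACK_WAIT_MAX_US;
}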
>
> Another thing I've longed for (sometimes) is whether or not an
> application like a web
> browser signalling the OS that it has a batch of network packets
> coming would help...
Wireless unfortunately has a very high per-packet overhead, which aggregation merely "hides" by amortizing it over more than one data packet. How about trying to find a better, less wasteful MAC instead ;) (and now we have two problems...) Really, from a latency perspective it clearly is better to avoid overhead than to use batching to amortize it, since batching increases latency (though I stipulate that there are conditions in which clever batching will not increase the noticeable latency, namely when it can hide inside another latency-increasing process).
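To put rough numbers on that amortization-versus-latency trade-off (the 50us fixed per-transmission cost and 20us of per-frame airtime below are assumptions for illustration, not measurements):

#include <stdio.h>

int main(void)
{
    const double overhead_us = 50.0;   /* assumed fixed per-txop MAC/PHY cost */
    const double payload_us  = 20.0;   /* assumed airtime per 1500B frame     */

    for (int n = 1; n <= 32; n *= 2) {
        double txop_us    = overhead_us + n * payload_us;
        double efficiency = (n * payload_us) / txop_us;
        printf("aggregate of %2d: efficiency %3.0f%%, txop %4.0f us\n",
               n, 100.0 * efficiency, txop_us);
    }
    return 0;
}

With these numbers efficiency climbs from roughly 29% to over 90% as the aggregate grows, but the last packet in a 32-frame batch waits behind almost 700us of airtime, which is exactly the latency cost of amortization.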
>
> web browser:
> setsockopt(batch_everything)
> parse the web page, generate all your dns, tcp requests, etc, etc
> setsockopt(release_batch)
>
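As an aside, the closest thing Linux offers today is per-socket rather than the cross-socket batch sketched above: TCP_CORK holds partially-filled segments until the application uncorks the socket. A small sketch (error handling omitted, purely illustrative):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

/* Queue several writes, then release them as maximally-sized segments. */
static void send_batched(int fd, const char *hdr, size_t hlen,
                         const char *body, size_t blen)
{
    int on = 1, off = 0;

    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof on);   /* cork  */
    write(fd, hdr, hlen);      /* buffered, not yet pushed to the wire */
    write(fd, body, blen);
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof off); /* flush */
}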
>> Kathie
>>
>> (we tried this mechanism out for cable data head ends at Com21 and it
>> went into a patent that probably belongs to Arris now. But that was for
>> cable. It is a fact universally acknowledged that a packet of data must
>> be in want of an acknowledgement.)
>
> voip doesn't behave this way, but for recognisable protocols like tcp
> and perhaps quic...
I note that for voip, waiting does not make sense, as all packets carry information, and keeping jitter low will noticeably increase a call's perceived quality (if only by allowing the application to use a smaller de-jitter buffer and hence less latency). There is a reason why wifi's voice access class both has the highest probability of getting the next tx-slot and is not allowed to send aggregates (whether that is fully sane is another question, which I do not feel competent to answer).
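For reference, the usual way a receiver sizes that de-jitter buffer is from a smoothed jitter estimate in the style of RFC 3550, so lower link jitter translates almost one-for-one into lower playout delay. A small illustrative sketch (the 3x multiplier is only a common rule of thumb, not a standard):

#include <stdint.h>

struct jitter_est {
    uint32_t jitter_us;         /* smoothed interarrival jitter, in us */
};

/* transit_delta_us = |(arrival_i - arrival_j) - (ts_i - ts_j)| for
 * consecutive packets i, j, as in RFC 3550 section 6.4.1. */
static void jitter_update(struct jitter_est *e, uint32_t transit_delta_us)
{
    e->jitter_us += ((int32_t)transit_delta_us - (int32_t)e->jitter_us) / 16;
}

/* Size the playout delay at a small multiple of the smoothed jitter. */
static uint32_t playout_delay_us(const struct jitter_est *e)
{
    return 3 * e->jitter_us;
}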
I also think that on a docsis system it is probably a decent heuristic to assume that the endpoints will be a few milliseconds away at most (and that only due to the coarse docsis grant-request clock).
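Rough arithmetic behind that "few milliseconds" (the ~2 ms MAP interval is an assumed, typical value, not a spec quote):

#include <stdio.h>

int main(void)
{
    const double map_ms = 2.0;  /* assumed MAP/grant cadence */

    /* A packet that can piggyback its request waits about one interval
     * for the grant; one that just missed its request opportunity waits
     * roughly one interval to request and another for the grant. */
    printf("best case  (piggybacked request): ~%.0f ms\n", 1.0 * map_ms);
    printf("worst case (missed request slot): ~%.0f ms\n", 2.0 * map_ms);
    return 0;
}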
Best Regards
Sebastian
>
>
>
>
> --
>
> Dave Täht
> CEO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-669-226-2619