General list for discussing Bufferbloat
From: Sebastian Moeller <moeller0@gmx.de>
To: "Dave Täht" <dave.taht@gmail.com>
Cc: Kathleen Nichols <nichols@pollere.com>,
	bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] lwn.net's tcp small queues vs wifi aggregation solved
Date: Thu, 21 Jun 2018 22:11:43 +0200	[thread overview]
Message-ID: <15B9974E-13DE-4D07-B8C3-4AB007172F43@gmx.de> (raw)
In-Reply-To: <CAA93jw5kD768UcWKGTEsT7-75o=OoUuDnbPiKy4s4zVaUNNzGA@mail.gmail.com>

Hi Dave,

> On Jun 21, 2018, at 21:54, Dave Taht <dave.taht@gmail.com> wrote:
> 
> On Thu, Jun 21, 2018 at 12:41 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>> Hi All,
>> 
>>> On Jun 21, 2018, at 21:17, Dave Taht <dave.taht@gmail.com> wrote:
>>> 
>>> On Thu, Jun 21, 2018 at 9:43 AM, Kathleen Nichols <nichols@pollere.com> wrote:
>>>> On 6/21/18 8:18 AM, Dave Taht wrote:
>>>> 
>>>>> This is a case where inserting a teeny bit more latency to fill up the
>>>>> queue (ugh!), or a driver having some way to ask the probability of
>>>>> seeing more data in the
>>>>> next 10us, or... something like that, could help.
>>>>> 
>>>> 
>>>> Well, if the driver sees the arriving packets, it could infer that an
>>>> ack will be produced shortly and will need a sending opportunity.
>>> 
>>> Certainly in the case of wifi and lte and other simplex technologies
>>> this seems feasible...
>>> 
>>> 'cept that we're all busy finding ways to do ack compression this
>>> month and thus the
>>> two big tcp packets = 1 ack rule is going away. Still, an estimate,
>>> with a short timeout
>>> might help.
>> 
>>        That short timeout seems essential: just because a link is wireless does not mean the ACKs for passing TCP packets will appear shortly; who knows what routing happens after the wireless link (think city-wide mesh network). Such a solution should first figure out whether waiting has any chance of being useful, by looking at the typical delay between data packets and the matching ACKs.
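A minimal sketch of the heuristic described above: pair data-packet timestamps with their matching ACK timestamps and only hold the transmit queue open if ACKs typically return within the short timeout window. Everything here (the function name, the 10 ms window, the timestamp representation) is hypothetical illustration, not an existing driver API.

```python
from statistics import median

def ack_delay_estimate(data_ts, ack_ts, timeout=0.01):
    """Decide whether waiting for ACKs is likely to pay off.

    data_ts / ack_ts: lists of (seq, time) tuples a hypothetical
    driver observed for outgoing data and incoming ACKs.
    timeout: the short hold window in seconds (assumed 10 ms here).
    Returns (typical_delay, worth_waiting).
    """
    acks = dict(ack_ts)  # seq -> arrival time of the matching ACK
    delays = [acks[seq] - t for seq, t in data_ts if seq in acks]
    if not delays:
        return None, False  # no evidence yet: do not delay traffic
    typical = median(delays)
    # Waiting only helps if ACKs usually return inside the window;
    # on a city-wide mesh they may not, and we should send at once.
    return typical, typical <= timeout
```

On a local client the measured delay would usually be well under the window, while a multi-hop path behind the wireless link would push the median past it and disable the wait.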
> 
> We are in this discussion, having a few issues with multiple contexts.
> Mine (and eric's) is in improving wifi clients (laptops, handhelds)
> behavior, where the tcp stack is local.

	Ah, sorry, I got this wrong and was looking at it from the AP's perspective; sorry for the noise, and thanks for the patience.

> 
> packet pairing estimates on routers... well, if you get an aggregate
> "in", you should be able to get an aggregate "out" when it traverses
> the same driver. routerwise, ack compression "done right" will help a
> bit... it's the "done right" part that's the sticking point.

	How will ACK compression help? If done aggressively it will thin out the ACK stream, potentially making ACK aggregation infeasible, no? On the other hand, if the ACK stream is sparse enough, maybe not aggregating it is not too painful? I guess I am just slow today...
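The tension Sebastian raises can be made concrete with a toy model: if a cumulative ACK is emitted only once per N acknowledged segments, the reverse-direction frame count drops by roughly that factor, leaving fewer frames to aggregate. This is an illustrative sketch only; real ACK thinning (e.g. in a qdisc or driver) is stateful and timeout-driven, which this deliberately omits.

```python
def compress_acks(acks, max_ratio=2):
    """Toy ACK 'compression': emit at most one cumulative ACK per
    max_ratio acknowledged segments, always keeping the final ACK
    so the sender's window is fully released.

    acks: ascending cumulative ACK numbers, one per received segment.
    Returns the thinned list actually put on the air.
    """
    out = []
    covered = 0  # segments acknowledged since the last emitted ACK
    for ack in acks:
        covered += 1
        if covered >= max_ratio or ack == acks[-1]:
            out.append(ack)
            covered = 0
    return out
```

With `max_ratio=4`, ten per-segment ACKs shrink to three frames: too few to form a useful aggregate, but also cheap enough that not aggregating them may not matter, which is exactly the open question above.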

Best Regards
	Sebastian

> 
>> 
>>> 
>>> Another thing I've longed for (sometimes) is whether or not an
>>> application like a web
>>> browser signalling the OS that it has a batch of network packets
>>> coming would help...
>> 
>>        To make up for the fact that wireless unfortunately incurs a very high per-packet overhead, it just tries to "hide" it by amortizing it over more than one data packet. How about trying to find a better, less wasteful MAC instead ;) (and now we have two problems...)
> 
> On my bad days I'd really like to have a do-over on wifi. The only
> hope I've had has been for LiFi or a resurrection of
> 
> I haven't poked into what's going on in 5G lately (the mac is
> "better", but towers being distant does not help), nor have I been
> tracking 802.11ax for a few years. Lower latency was all over the
> 802.11ax standard when I last paid attention.
> 
> Has 802.11ad gone anywhere?
> 
> 
>> Now really, from a latency perspective it clearly is better to avoid overhead instead of using "batching" to better amortize it, since batching increases latency (I stipulate that there are conditions in which clever batching will not increase the noticeable latency, if it can hide inside another latency-increasing process).
>> 
>>> 
>>> web browser:
>>> setsockopt(batch_everything)
>>> parse the web page, generate all your dns, tcp requests, etc, etc
>>> setsockopt(release_batch)
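The `batch_everything` / `release_batch` calls above are a wished-for API, not an existing one (on Linux, the per-socket `TCP_CORK` option is probably the closest real mechanism, though it works per connection rather than per application). A hypothetical sketch of the proposed semantics, with all names invented:

```python
class BatchingSocket:
    """Models the wished-for batch_everything / release_batch
    semantics: writes are queued while the 'cork' is set and
    flushed as a single burst when it is released.
    """

    def __init__(self):
        self.corked = False
        self._pending = []
        self.bursts = []  # each entry = one transmit opportunity

    def setsockopt_batch(self, on):
        """Set or clear the hypothetical batching flag."""
        was_corked, self.corked = self.corked, on
        if was_corked and not on:
            # release_batch: everything queued goes out as one burst,
            # giving the lower layers one big aggregation opportunity
            self.bursts.append(b"".join(self._pending))
            self._pending.clear()

    def send(self, data):
        if self.corked:
            self._pending.append(data)  # held until release_batch
        else:
            self.bursts.append(data)    # normal immediate send
```

A browser would cork, issue all the DNS/TCP requests generated while parsing the page, then release, so the stack sees one burst instead of a trickle of small writes.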
>>> 
>>>>       Kathie
>>>> 
>>>> (we tried this mechanism out for cable data head ends at Com21 and it
>>>> went into a patent that probably belongs to Arris now. But that was for
>>>> cable. It is a fact universally acknowledged that a packet of data must
>>>> be in want of an acknowledgement.)
>>> 
>>> voip doesn't behave this way, but for recognisable protocols like tcp
>>> and perhaps quic...
>> 
>>        I note that for voip, waiting does not make sense, as all packets carry information and keeping jitter low will noticeably increase a call's perceived quality (if just by allowing the application to use a smaller de-jitter buffer and hence less latency). There is a reason why wifi's voice access class both has the highest probability to get the next tx-slot and also is not allowed to send aggregates (whether that is fully sane is another question, answering which I do not feel competent).
>>        I also think that on a docsis system it is probably a decent heuristic to assume that the endpoints will be a few milliseconds away at most (and only due to the coarse docsis grant-request clock).
>> 
>> Best Regards
>>        Sebastian
>> 
>> 
>>> 
>>>> _______________________________________________
>>>> Bloat mailing list
>>>> Bloat@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/bloat
>>> 
>>> 
>>> 
>>> --
>>> 
>>> Dave Täht
>>> CEO, TekLibre, LLC
>>> http://www.teklibre.com
>>> Tel: 1-669-226-2619
>> 
> 
> 
> 

