From: Luca Muscariello <luca.muscariello@gmail.com>
Date: Fri, 01 Dec 2017 18:57:05 +0000
To: Dave Taht
Cc: Sebastian Moeller, bloat
Subject: Re: [Bloat] benefits of ack filtering

https://www.cisco.com/c/en/us/products/collateral/wireless/aironet-3700-series/white-paper-c11-735947.html

On Fri 1 Dec 2017 at 19:43, Dave Taht <dave@taht.net> wrote:

> Luca Muscariello <luca.muscariello@gmail.com> writes:
>
> > For highly asymmetric links, but also shared media like wifi, QUIC
> > might be a better playground for optimisations. It is not as
> > pervasive as TCP, though, and maybe off topic in this thread.
>
> I happen to really like QUIC, but a netperf-style tool did not exist
> for it when I last looked, last year.
>
> Also getting to emulating DASH traffic is on my list.
>
> > If the downlink is what one wants to optimise, using FEC in the
> > downstream, in conjunction with flow control, could be very
> > effective. There would be no need to send ACKs frequently, and
> > having something like fq_codel in the downstream would avoid the
> > fairness problems that might otherwise arise. I don't know if FEC
> > is still in QUIC and used.
> >
> > BTW, for wifi, the ACK stream can be compressed into aggregates of
> > frames and sent in bursts. This is similar to the DOCSIS upstream.
> > I wonder whether this is a phenomenon that is visible in recent
> > WiFi or just negligible.
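The back-of-envelope arithmetic behind ACK rates like the 35 Kpps figure debated in this thread, and behind the sparse-flow objection, can be sketched as follows. This is a rough model, not a value from the thread: the MSS, the one-ACK-per-two-segments delayed-ACK ratio, and the queue interval are all assumptions.

```python
# Back-of-envelope ACK arithmetic for an asymmetric link.
# Assumptions (not from the thread): MSS of 1448 bytes, one delayed
# ACK per two full segments, and a fixed queue drain interval.

def ack_rate_pps(link_bps, mss_bytes=1448, segs_per_ack=2):
    """Pure-ACK packet rate needed to acknowledge a saturated downlink."""
    data_pps = link_bps / 8 / mss_bytes
    return data_pps / segs_per_ack

def acks_per_flow_in_queue(total_ack_pps, n_flows, queue_ms):
    """How many ACKs of a single flow an ACK filter sees queued at once."""
    return total_ack_pps * (queue_ms / 1000.0) / n_flows

# A single bulk flow on a fast downlink generates tens of thousands of
# ACKs per second -- the regime where ACK filtering has plenty to drop:
print(round(ack_rate_pps(1e9)))   # roughly 43 thousand ACKs/s at 1 Gbps

# With many sparse flows, each flow contributes well under one ACK per
# queue interval, so consecutive-ACK filtering finds nothing to thin:
print(acks_per_flow_in_queue(43_000, 70_000, queue_ms=40))
```

Under these assumptions, 70K flows sharing the same aggregate ACK rate leave about 0.02 ACKs per flow in a 40 ms queue, which is the worst case for per-flow ACK thinning.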
> My guess is that Meraki deployed something, and I think they are in
> the top 5 in the enterprise market.
>
> I see that Ubnt added airtime fairness (of some sort) recently.
>
> > On Fri, Dec 1, 2017 at 9:45 AM, Sebastian Moeller <moeller0@gmx.de> wrote:
> >
> > > Hi All,
> > >
> > > you do realize that the worst case is going to stay at 35 Kpps?
> > > If we assume simply that the 100 Mbps download rate is created
> > > not by a single flow but by many flows (say 70K flows), the
> > > discussed ACK frequency reduction schemes will not work that
> > > well. So ACK thinning is a nice optimization, but it will not
> > > change the fact that some ISPs/link technologies simply are
> > > asymmetric, and the user will suffer under some traffic
> > > conditions. Now, the 70K-flow example is too extreme, but the
> > > fact is that at high flow counts with sparse flows (so fewer
> > > ACKs per flow in the queue, and fewer ACKs per flow reaching the
> > > end NIC in a GRO-collection interval; I naively assume there is
> > > a somewhat fixed but small interval in which packets of the same
> > > flow are collected for GRO) there will be problems. (Again, I am
> > > all for allowing the end user to configure ACK
> > > filtering/thinning, but I would rather see ISPs sell less
> > > imbalanced links ;) )
> > >
> > > Best Regards
> > > Sebastian
> > >
> > > > On Dec 1, 2017, at 01:28, David Lang <david@lang.hm> wrote:
> > > >
> > > > 35 Kpps of ACKs is insane; one ACK every ms is FAR more than
> > > > enough to do 'fast recovery', and outside the datacenter, one
> > > > ACK per 10 ms is probably more than enough.
> > > >
> > > > Assuming something that's not too asymmetric, thinning out the
> > > > ACKs may not make any difference in the transfer rate of a
> > > > single data flow in one direction, but if you step back and
> > > > realize that there may be a need to transfer data in the other
> > > > direction, things change here.
> > > > If you have a fully symmetrical link, and are maxing it out
> > > > in both directions, going from 35 Kpps of ACKs competing with
> > > > data packets down to 1 Kpps or 100 pps (or 10 pps) would
> > > > result in a noticeable improvement in the flow that the ACKs
> > > > are competing against.
> > > >
> > > > Stop thinking in terms of single-flow benchmarks and near-idle
> > > > 'upstream' paths.
> > > >
> > > > David Lang

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
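David Lang's point about reclaiming upstream capacity can be made concrete with a quick estimate. This is a sketch; the 84 bytes per ACK on the wire (64-byte minimum Ethernet frame plus 8-byte preamble and 12-byte inter-frame gap) is an assumption about the link layer, not a figure from the thread.

```python
# Upstream bandwidth consumed by a pure-ACK stream at various rates.
# Assumption: each ACK occupies 84 bytes on an Ethernet-like wire
# (64-byte minimum frame + 8-byte preamble + 12-byte inter-frame gap).

def ack_stream_mbps(ack_pps, wire_bytes=84):
    return ack_pps * wire_bytes * 8 / 1e6

for pps in (35_000, 1_000, 100):
    print(f"{pps:>6} ACKs/s -> {ack_stream_mbps(pps):6.2f} Mbps upstream")

# 35 Kpps of ACKs alone occupy about 23.5 Mbps of upstream -- a large
# share of a symmetric 100 Mbps link, versus well under 1 Mbps after
# thinning to 1 Kpps.
```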