From: Luca Muscariello
Date: Fri, 1 Dec 2017 11:45:19 +0100
To: Sebastian Moeller
Cc: David Lang, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] benefits of ack filtering

For highly asymmetric links, but also for shared media like WiFi, QUIC might be a better playground for optimisations. It is not as pervasive as TCP, though, and may be off topic in this thread.

If the downlink is what one wants to optimise, using FEC in the downstream, in conjunction with flow control, could be very effective. There would be no need to send ACKs frequently, and having something like FQ_CoDel in the downstream would avoid the fairness problems that might otherwise arise. I don't know whether FEC is still in QUIC and actually used.

BTW, for WiFi the ACK stream can be compressed into an aggregate of frames and sent in bursts. This is similar to the DOCSIS upstream. I wonder whether this is a phenomenon that is visible in recent WiFi or just negligible.
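To make that batching idea concrete, here is a minimal sketch (Python, purely illustrative; the flow key and packet fields are simplified stand-ins, and this is not any shipping WiFi or DOCSIS implementation) of thinning pure ACKs inside one aggregate before it goes out as a burst: within the batch, a newer cumulative ACK for a flow makes the older ones redundant.

    from collections import OrderedDict

    def thin_acks(batch):
        """Thin pure ACKs within one to-be-aggregated batch of packets.

        batch: list of dicts like {'flow': ..., 'ack': ..., 'pure_ack': bool}.
        A real filter must also keep ACKs carrying SACK blocks, window updates,
        ECN marks, etc.; that is ignored here for brevity.
        """
        latest = OrderedDict()              # flow -> newest pure ACK seen in this batch
        out = []
        for pkt in batch:
            if pkt['pure_ack']:
                latest[pkt['flow']] = pkt   # supersedes earlier ACKs of the same flow
            else:
                out.append(pkt)             # data packets pass through untouched
        out.extend(latest.values())         # simplification: surviving ACKs go at the tail
        return out

    batch = [
        {'flow': 'A', 'ack': 1000, 'pure_ack': True},
        {'flow': 'A', 'ack': 2000, 'pure_ack': True},   # makes the previous one redundant
        {'flow': 'B', 'ack': 500,  'pure_ack': True},
        {'flow': 'A', 'ack': 0,    'pure_ack': False},  # a data packet, kept as-is
    ]
    print(len(thin_acks(batch)))   # -> 3: the data packet plus one ACK per flow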
On Fri, Dec 1, 2017 at 9:45 AM, Sebastian Moeller <moeller0@gmx.de> wrote:

> Hi All,
>
> you do realize that the worst case is going to stay at 35K PPS? If we
> assume simply that the 100 Mbps download rate is not created by a single
> flow but by many flows (say 70K flows), the discussed ACK frequency
> reduction schemes will not work that well. So ACK thinning is a nice
> optimization, but it will not change the fact that some ISPs/link
> technologies simply are asymmetric and the user will suffer under some
> traffic conditions. Now the 70K flow example is too extreme, but the fact
> is that at high flow counts with sparse flows (so fewer ACKs per flow in
> the queue, and fewer ACKs per flow reaching the end NIC in a GRO-collection
> interval (I naively assume there is a somewhat fixed but small interval in
> which packets of the same flow are collected for GRO)) there will be
> problems. (Again, I am all for allowing the end user to configure ACK
> filtering/thinning, but I would rather see ISPs sell less imbalanced
> links ;) )
>
> Best Regards
>         Sebastian
>
> > On Dec 1, 2017, at 01:28, David Lang <david@lang.hm> wrote:
> >
> > 35K PPS of ACKs is insane; one ACK every ms is FAR more than enough to
> > do 'fast recovery', and outside the datacenter, one ACK per 10 ms is
> > probably more than enough.
> >
> > Assuming something that's not too asymmetric, thinning out the ACKs may
> > not make any difference in the transfer rate of a single data flow in
> > one direction, but if you step back and realize that there may be a need
> > to transfer data in the other direction, things change.
> >
> > If you have a fully symmetrical link, and are maxing it out in both
> > directions, going from 35K PPS of ACKs competing with data packets down
> > to 1K PPS or 100 PPS (or 10 PPS) would result in a noticeable
> > improvement in the flow that the ACKs are competing against.
> >
> > Stop thinking in terms of single-flow benchmarks and near-idle
> > 'upstream' paths.
> >
> > David Lang
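To put rough numbers on the ACK rates discussed above, here is a back-of-the-envelope sketch; the 1500-byte segments, one delayed ACK per two segments, and 64-byte ACK frames are assumptions for illustration, not the exact link parameters from this thread.

    def ack_load(download_bps, mss_bytes=1500, segs_per_ack=2, ack_bytes=64):
        """Estimate the pure-ACK load put on the uplink by a given downlink rate."""
        data_pps = download_bps / (mss_bytes * 8)   # full-size data packets/s on the downlink
        ack_pps = data_pps / segs_per_ack           # pure ACKs/s sent back on the uplink
        ack_bps = ack_pps * ack_bytes * 8           # uplink bandwidth consumed by those ACKs
        return data_pps, ack_pps, ack_bps

    for rate in (100e6, 1e9):                       # 100 Mbit/s and 1 Gbit/s downloads
        data_pps, ack_pps, ack_bps = ack_load(rate)
        print(f"{rate / 1e6:6.0f} Mbit/s down: ~{ack_pps:7.0f} ACK/s, "
              f"~{ack_bps / 1e6:4.1f} Mbit/s of uplink just for ACKs")

At 1 Gbit/s this lands in the tens-of-kilopackets-per-second range the thread is talking about. It also illustrates Sebastian's point: split the same download across very many sparse flows and a per-flow filter rarely finds two ACKs of the same flow queued at the same time, so the aggregate ACK rate barely drops.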